Anthropic CEO Unveils Claude, a Chatbot Claiming to Outshine ChatGPT in Conversational Skills
From OpenAI to Anthropic: Dario Amodei’s New AI Frontier
Dario Amodei, formerly OpenAI’s vice president of research, spent nearly five years helping build the models that underpin ChatGPT. Since leaving, he has pivoted into a new AI venture.
Launching Anthropic
- In 2021, Amodei co‑founded the AI company Anthropic.
- He has released a chatbot named Claude, which early users say is more conversational and creative than ChatGPT.
- In early 2023, Google reportedly invested about $300 million in Anthropic, while Microsoft committed a reported $10 billion to OpenAI.
Claude Goes Public
On Tuesday, Anthropic announced an invite‑only API for Claude, enabling developers to integrate the model into their own applications. APIs, or application programming interfaces, let different apps talk to one another.
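To make the idea concrete, here is a minimal sketch of what integrating Claude might look like in Python. It uses Anthropic’s later‑published SDK rather than the invite‑only API described at launch, and the model name and prompt are illustrative placeholders.

```python
# Minimal sketch of calling Claude through Anthropic's Python SDK.
# The model identifier and prompt are illustrative placeholders.
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")  # key issued with API access

message = client.messages.create(
    model="claude-3-haiku-20240307",  # placeholder model name
    max_tokens=256,                   # cap on the length of the reply
    messages=[
        {"role": "user", "content": "Summarize this support ticket in two sentences."},
    ],
)

print(message.content[0].text)  # the assistant's reply as plain text
```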
Why Google Invests
Google’s stake in Anthropic partly hedges against OpenAI’s growing influence. Amodei notes that improving Claude will be an ongoing process, with no perfect moment for broader rollout.
Claude and Claude Instant
- Anthropic is launching two versions: Claude, the full flagship model, and Claude Instant, a lighter, faster, cheaper variant that trades away some capability (a hypothetical routing sketch follows this list).
- Partners such as Quora and Notion are already using Claude.
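As an illustration of how the two tiers might be used together, here is a hypothetical routing helper; the model names and the heuristic are invented for the example, not part of Anthropic’s product.

```python
# Hypothetical router between the two Claude tiers. The model names and
# the heuristic are illustrative, not part of Anthropic's actual API.
def pick_model(prompt: str) -> str:
    """Send demanding prompts to the full model, quick ones to the cheap tier."""
    demanding = len(prompt) > 500 or "analyze" in prompt.lower()
    return "claude" if demanding else "claude-instant"

print(pick_model("What are your store hours?"))             # -> claude-instant
print(pick_model("Please analyze this merger agreement."))  # -> claude
```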
AI’s New Hype
Since ChatGPT’s debut, chatbots have exploded in popularity for their human‑like responses and their ability to generate essays, summaries, and open‑ended answers, drawing on the vast swaths of internet text they were trained on.
Anthropic is working on ‘constitutional AI’
Anthropic’s New Approach to AI Safety
Fast Progress in Large Language Models
According to Amodei, large language models are advancing “fairly fast.”
While Amodei’s background traces back to OpenAI, Anthropic was founded on a distinct philosophy built around two commitments:
- Transparency – Anthropic aims to provide society with a clear understanding of how AI systems operate.
- Control – The goal is to establish mechanisms that enable safe governance as these models grow more powerful.
By fostering a deeper public grasp of AI behavior, Anthropic seeks to ensure that future AI developments can be managed responsibly.
Redefining AI’s Moral Compass: The Constitutional Approach
Discover how a new framework could steer the future of intelligent systems toward transparency and trust.
Understanding the Core Challenge
Chatbots, designed to simulate human dialogue, frequently reveal behaviors that diverge from user intent. The risk is not only the generation of unwanted content—such as offensive remarks—but also the emergence of hidden biases that surface over time. Conventional methods aim to mitigate these issues, yet they lack a clear blueprint for accountability.
Traditional Feedback Loops
- OpenAI uses reinforcement learning from human feedback, in which human reviewers rate or rank the model’s candidate responses.
- Google trains Bard through a similar process, iteratively refining the model according to aggregated feedback.
- Although the system’s responses become statistically aligned with human sentiment, the model ends up giving an “average” answer that may obscure the true source of a bias (a toy illustration follows this list).
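Here is a toy example of that averaging effect, with invented ratings: when the tuning signal is the mean of reviewer scores, the inoffensive middle‑of‑the‑road reply beats a sharper one that divides raters.

```python
# Toy illustration of aggregate feedback: the reply with the highest mean
# rating wins, even though no single principle explains the choice.
ratings = {
    "hedged, middle-of-the-road reply": [4, 4, 4, 4],  # pleases most raters
    "sharp but polarizing reply":       [5, 5, 1, 2],  # divides the room
}

def mean(xs):
    return sum(xs) / len(xs)

best = max(ratings, key=lambda reply: mean(ratings[reply]))
print(best)  # -> hedged, middle-of-the-road reply (mean 4.0 beats 3.25)
```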
Introducing Constitutional AI
Anthropic’s “constitutional AI” proposes an explicit moral contract as the cornerstone of any interaction. The contract, drafted by Anthropic, outlines fundamental principles that the AI must adhere to during an exchange.
How the Contract Drives Interaction
Rule #1 – Transparency
When a user raises a concern about political bias, the AI consults the contract’s directives and explains the reasoning behind its response, so the user can see why the model behaves in a particular way.
Rule #2 – Predictable Outcomes
Unlike feedback loops that yield an averaging effect, the constitutional approach empowers the AI to self‑critique and re‑evaluate its output against the contract. The result is a more reliable and intentional dialogue.
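In outline, that self‑critique cycle might look like the sketch below. The `generate` callable stands in for any language‑model call, and the principles and prompts are invented for illustration; this is not Anthropic’s actual training procedure.

```python
from typing import Callable

# Sketch of a constitutional critique-and-revise cycle. `generate` stands in
# for any language-model call; the principles and prompts are illustrative.
PRINCIPLES = [
    "Explain the reasoning behind a response when asked.",
    "Avoid taking sides on contested political questions.",
]

def constitutional_reply(generate: Callable[[str], str], user_prompt: str) -> str:
    rules = "\n".join(PRINCIPLES)
    draft = generate(user_prompt)  # first attempt at an answer
    critique = generate(           # the model critiques its own draft
        f"Critique this reply against the principles:\n{rules}\n\nReply: {draft}"
    )
    # Revise against the written principles rather than averaged ratings.
    return generate(f"Rewrite the reply to address this critique:\n{critique}")
```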
Practical Application Scenarios
- Summarizing legal documents: the AI reads and condenses the contents without straying into controversial commentary.
- Customizing for enterprise environments: Anthropic plans to let corporate customers tailor the contract for internal use by drafting their own constitutions within reasonable bounds, though the feature is currently limited.
The Road Ahead
Anthropic envisions a future where every AI system adheres to a clearly defined moral framework. The objective is straightforward: the AI respects the user’s contractual directives, offers meticulous transparency, and behaves within previously established bounds.
Why the Constitutional Method Matters
- Accountability: The model’s behavior is anchored in a documented set of principles, sharply reducing ambiguity.
- Trust: Users gain confidence that the AI’s decisions are driven by active deliberation, not passive averaging.
- Scalability: Even as the model encounters complex scenarios, the contractual directives provide a stable foundation for consistent responses.
As AI technology continues to evolve, the constitutional approach offers a meaningful and transparent pathway toward responsible, human‑centred intelligence.
Chatbots are prone to ‘hallucination’
Chatbots Face “Hallucination” Critique Amid Factual Accuracy Debate
Leading AI developers Microsoft and Google have come under fire for their chatbots’ tendency to generate believable yet incorrect responses. The industry has dubbed this issue “hallucination,” highlighting the gap between grammatical fluency and factual accuracy.
Claude Accepts Imperfection but Seeks Incremental Gains
Anthropic’s Amodei acknowledged that Claude is not flawless, yet believes its performance can improve over time. He noted the trade‑off: a model could avoid hallucinating entirely by simply refusing to answer anything, but withholding replies would cripple its usefulness.
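That trade‑off can be made concrete with a toy abstention rule, using invented confidence scores: raise the bar to certainty and hallucinations disappear, but so does every answer.

```python
# Toy abstention rule: answer only above a confidence threshold.
# At threshold 1.0 the model never hallucinates -- and never answers.
candidates = [
    ("The Eiffel Tower is in Paris.", 0.98),
    ("The Eiffel Tower opened in 1740.", 0.55),  # fluent but false
]

def respond(answers, threshold):
    return [text for text, score in answers if score >= threshold]

print(respond(candidates, threshold=0.9))  # answers only what it is sure of
print(respond(candidates, threshold=1.0))  # refuses everything: safe but useless
```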
Factual Accuracy Remains a Top Priority
Amodei stressed that improving the bots’ factual fidelity is both urgent and imperative, and he expressed optimism that these models can earn users’ trust: “We must be able to trust these models, and that’s the core of what we do.”
These remarks underscore the industry’s commitment to refining chatbot reliability while preserving their practical capabilities.