Anthropic CEO Claims AI Hallucinates Less Than Humans, Stirs Debate on AGI Progress
Anthropic CEO Dario Amodei claims AI hallucinates less than humans, arguing that hallucinations aren't a barrier to AGI—sparking debate among experts and users over the reliability of current AI systems.
**SAN FRANCISCO, May 23 (Techcept)** – The CEO of artificial intelligence startup Anthropic, Dario Amodei, said on Wednesday that large AI models may hallucinate less frequently than humans, adding to the growing debate around the limitations and potential of generative AI systems.
Speaking at Anthropic’s inaugural developer conference, *Code with Claude*, Amodei argued that while hallucinations—instances where AI generates false or fabricated information—are a valid concern, they are not a fundamental barrier to the development of Artificial General Intelligence (AGI), a level of AI capability comparable to or surpassing human cognition.
“How hallucination is measured matters,” Amodei told attendees in San Francisco. “But compared to humans, AI likely hallucinates less frequently—even if its errors can be more surprising.”
The remarks come amid heightened scrutiny of generative AI tools, which are increasingly being integrated into business operations, legal workflows, and everyday consumer products. Critics warn that the confident tone in which AI systems deliver inaccurate results can mislead users and create real-world harm.
**Divided Opinions Among AI Leaders**
Amodei’s comments drew a contrast with more cautious views within the AI research community. Demis Hassabis, CEO of Google DeepMind, has repeatedly cited hallucinations as a persistent issue limiting the reliability and trustworthiness of current models.
Real-world examples underscore this concern. In one widely reported case, a lawyer submitted a legal filing that cited fictitious case law generated by an AI tool, prompting an official apology and raising questions about AI's readiness for professional applications.
“Even when asked simple questions, AI models can produce confidently incorrect answers,” Hassabis said in a previous interview. “That undermines user trust.”
**Challenges in Measuring Hallucinations**
There is no standardized method for comparing hallucination rates between humans and AI, making Amodei’s claim difficult to independently verify. Most evaluations benchmark AI models against one another, rather than against human baselines.
Some strategies, such as allowing AI systems to consult the web before responding, have shown promise in reducing error rates. However, recent advancements in reasoning models have paradoxically led to increased hallucinations, a phenomenon researchers have yet to fully understand.
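The measurement gap is easy to see in miniature. The sketch below is offered purely for illustration: it scores a respondent against a tiny set of questions with known answers. The question set, the `answer_question` callable, and the crude substring-matching grader are all simplifying assumptions, not a description of any lab's methodology; real evaluations use far larger vetted datasets and more careful grading. Running the same protocol on human respondents under comparable conditions is the baseline most current benchmarks lack, which is why claims like Amodei's are difficult to verify.

```python
# Minimal sketch of a hallucination-rate benchmark (illustrative assumptions only).
# "answer_question" stands in for any respondent: an AI model or, in principle, a human.

from typing import Callable, Dict, List

# Toy reference set; real benchmarks use thousands of vetted items.
REFERENCE: List[Dict[str, str]] = [
    {"question": "What year did Apollo 11 land on the Moon?", "answer": "1969"},
    {"question": "What is the chemical symbol for gold?", "answer": "Au"},
]

def hallucination_rate(answer_question: Callable[[str], str]) -> float:
    """Fraction of questions where the response omits the reference fact."""
    errors = 0
    for item in REFERENCE:
        predicted = answer_question(item["question"])
        # Crude string check; production evaluations use graders or exact-match rubrics.
        if item["answer"].lower() not in predicted.lower():
            errors += 1
    return errors / len(REFERENCE)

if __name__ == "__main__":
    # Stub respondent for illustration; swap in a real model or human answers to compare.
    stub = lambda q: "The landing was in 1969." if "Apollo" in q else "I am not sure."
    print(f"Hallucination rate: {hallucination_rate(stub):.0%}")
```

Even in this toy setup, the measured rate depends entirely on which questions are asked and how strictly answers are graded, the same choices that make human-versus-AI comparisons contentious.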
Amodei acknowledged that humans—including professionals, politicians, and broadcasters—routinely make factual errors. He contended that the tendency to make such errors is not a sign of flawed intelligence but a trait AI systems share with humans.
**Mixed Reactions and a Continuing Debate**
Public reaction to Amodei’s comments has been mixed. Some AI developers and users expressed agreement, while others voiced skepticism on social media, citing continued dependence on human oversight and fact-checking.
“Even if hallucinations are less frequent, the impact of a wrong answer delivered with absolute certainty is far greater when it comes from AI,” said an AI ethics researcher on X, formerly Twitter.
Anthropic, founded by former OpenAI researchers, is positioning itself as a key player in the AI space. Its Claude model competes with offerings from OpenAI, Google, and others in a rapidly evolving landscape.
Despite differing views on hallucinations, Amodei maintained that the field is advancing at an accelerating pace. “The water is rising everywhere,” he said, expressing confidence in continued progress toward AGI.