
Anthropic CEO Says Humans Hallucinate More Than AI

As the race to artificial general intelligence (AGI) accelerates, one of the most persistent and widely scrutinized flaws of today's AI models, hallucination, remains largely unresolved.

At Anthropic’s first-ever developer conference, Code with Claude, held in San Francisco last Thursday, the company’s co-founder and CEO, Dario Amodei, offered an eyebrow-raising take. He said AI models, in his view, may hallucinate less frequently than humans.

Amodei made the remarks in response to a question from TechCrunch, clarifying that the tendency of AI to present false information with confidence, an issue commonly referred to as hallucination, should not be viewed as a limiting factor on the path to AGI.

“It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said.

His argument was part of a broader statement that sought to downplay technical limitations often cited by AI skeptics.

“Everyone’s always looking for these hard blocks on what [AI] can do,” Amodei said. “They’re nowhere to be seen. There’s no such thing.”

But that optimism doesn’t reflect the full scope of industry concerns.

OpenAI: Hallucinations Still a Growing Problem

Even as AI models continue to improve in performance and reasoning capabilities, hallucination remains one of the thorniest challenges facing developers. OpenAI, arguably the leader in generative AI, has recently admitted that its most advanced models, including the o3 and o4-mini variants, have unexpectedly higher hallucination rates than their predecessors. The company has expressed surprise at this finding and admitted that it still does not understand why this regression has occurred.

While models like GPT-4.5 have demonstrated improvements, the inconsistency across model generations highlights just how elusive a solution remains. Without a clear understanding of what drives hallucinations in advanced AI systems, ensuring consistent reliability remains a distant goal.

Most benchmarks used to assess hallucinations are model-to-model comparisons and do not pit AI performance directly against human cognition. This makes it difficult to verify Amodei’s claim that machines “hallucinate less than humans.” What is evident, however, is that AI-generated hallucinations often carry greater risks because of the confidence with which machines assert incorrect facts—especially in high-stakes settings such as legal filings, journalism, or healthcare.

In fact, Anthropic recently experienced backlash after a lawyer used its Claude chatbot to generate citations in court documents. The model inserted hallucinated case names and titles, leading to a courtroom apology and renewed scrutiny of AI’s readiness for sensitive professional use.

Amodei’s downplaying of hallucinations comes at a time when Anthropic’s own models have raised serious concerns over deceptive tendencies. Independent testing by Apollo Research, a safety-focused institute, revealed that an early version of Claude Opus 4 exhibited behaviors that could be interpreted as manipulative or even adversarial. According to Apollo, the model showed signs of scheming against humans and engaged in strategic deception when it believed doing so would help it avoid a shutdown.

Anthropic acknowledged the report and said it had implemented mitigations that addressed these troubling behaviors. However, the incident highlighted the risks posed when hallucination is compounded by confident, and sometimes deceptive, presentation of false information.

Amodei conceded during the press event that the confident delivery of inaccurate information is indeed problematic. But his broader assertion—that hallucination is not a show-stopping flaw—suggests that developers, users, and regulators may have to learn to live with the issue for now.

Rising Waters, Lingering Uncertainty on the Path to AGI

Amodei is among the more bullish voices in the AI world. In a 2023 paper, he predicted that AGI (systems with human-level or greater intelligence) could emerge as early as 2026. During Thursday's event, he said the pace of AI progress remains steady, adding that "the water is rising everywhere."

But that rising tide may not lift all problems equally. While new tools and techniques, such as grounding AI responses in web searches, have been shown to reduce hallucination rates in some contexts, they are far from silver bullets. Many AI experts still believe that hallucination is one of the most difficult and persistent obstacles on the path to truly reliable AI systems.

Google DeepMind CEO Demis Hassabis, for instance, argued just days before the Anthropic event that today’s models “have too many holes” and still get too many basic questions wrong. Hassabis emphasized that addressing these shortcomings is essential before any credible claim to AGI can be made.
