Google DeepMind CEO Predicts A Decade-Long Wait For AI To Match Or Surpass Human Intelligence Across All Domains

Despite rapid advancements in artificial intelligence, the development of Artificial General Intelligence (AGI)—AI that can match or surpass human intelligence across all domains—remains an elusive goal. Demis Hassabis, CEO of Google DeepMind, has projected that AGI could begin to emerge in the next five to ten years, a timeline that contrasts with more aggressive predictions from some of his industry peers, including Elon Musk, who has warned that AI capable of surpassing human intelligence could arrive as soon as 2025.

Speaking at a briefing in DeepMind’s London offices on Monday, Hassabis defined AGI as “a system that’s able to exhibit all the complicated capabilities that humans can,” while emphasizing that today’s AI models, despite their impressive capabilities, remain far from achieving that level of intelligence.

“I think today’s systems, they’re very passive,” he said. “There’s still a lot of things they can’t do. But I think over the next five to ten years, a lot of those capabilities will start coming to the fore, and we’ll start moving towards what we call artificial general intelligence.”

Diverging Views on AGI’s Arrival

Hassabis’ measured outlook places him in contrast with other AI leaders who predict a much faster timeline for AGI. Dario Amodei, CEO of AI startup Anthropic, told CNBC in January that he expects AI systems that are “better than almost all humans at almost all tasks” to emerge in just two to three years. Cisco’s Chief Product Officer Jeetu Patel has gone even further, suggesting that meaningful evidence of AGI could appear as early as 2025.

Musk, one of the most vocal figures in the AI space, said last year that AI systems would likely surpass human intelligence by 2025, a claim that has fueled his calls for strict regulation. Musk has repeatedly warned that AI poses a serious threat to human civilization, arguing that without oversight, its rapid development could lead to catastrophic consequences.

Speaking at the World Government Summit in Dubai, Musk reiterated his concerns, saying, “AI is one of the biggest threats to the future of civilization.” He has called for regulatory bodies to oversee AI development, comparing the technology’s potential risks to those posed by nuclear weapons.

However, like Hassabis, many experts remain skeptical of the 2025 timeline. Robin Li, CEO of Chinese tech giant Baidu, has suggested that AGI is “more than 10 years away,” emphasizing that while AI is advancing quickly, it still lacks the fundamental reasoning and adaptability of human intelligence.

The Challenges of Achieving AGI

Despite the rapid evolution of AI systems such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, developing AGI requires overcoming several fundamental challenges.

According to Hassabis, the biggest obstacle is getting AI to truly understand and interact with the real world. While AI has excelled in structured environments, such as playing strategy games like Go or Starcraft, transferring those problem-solving abilities to the complexities of real-world decision-making remains an immense challenge.

“The question is, how fast can we generalize the planning ideas and agentic kind of behaviors, planning and reasoning, and then generalize that over to working in the real world, on top of things like world models—models that are able to understand the world around us?” Hassabis explained.

Multi-Agent AI Systems as a Possible Path to AGI

One promising avenue for AGI development is the advancement of multi-agent AI systems—networks of AI models that can collaborate, compete, and communicate to accomplish tasks. Hassabis pointed to DeepMind’s work on training AI agents to master Starcraft, a complex strategy game requiring high-level decision-making and coordination.

“We’ve done a lot of work on that with things like Starcraft in the past, where you have a society of agents, or a league of agents, and they could be competing, they could be cooperating,” he said.

Thomas Kurian, CEO of Google Cloud, added that enabling AI agents to communicate and share information with one another is a key step in developing AGI.

“Those are all elements that you need to be able to ask an agent a question, and then once you have that interface, other agents can communicate with it,” he said.
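To make the “society of agents” idea concrete, here is a minimal, hypothetical sketch in Python of agents that expose a shared ask() interface so a question can be routed from one agent to another. The class and method names are illustrative assumptions for this article, not DeepMind or Google Cloud APIs.

```python
# Toy illustration of a "society of agents": each agent exposes the same
# ask() interface, and a simple router passes a question to whichever agent
# can handle it. Names and behavior are hypothetical, for illustration only.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    skills: dict = field(default_factory=dict)  # topic keyword -> canned answer

    def ask(self, question: str):
        """Return an answer if this agent recognizes the topic, else None."""
        for topic, answer in self.skills.items():
            if topic in question.lower():
                return f"{self.name}: {answer}"
        return None


class AgentSociety:
    """Routes a question across a group of agents until one can answer."""

    def __init__(self, agents):
        self.agents = agents

    def ask(self, question: str) -> str:
        for agent in self.agents:
            reply = agent.ask(question)
            if reply is not None:
                return reply
        return "No agent could answer."


if __name__ == "__main__":
    society = AgentSociety([
        Agent("Planner", {"plan": "break the goal into ordered sub-tasks"}),
        Agent("Researcher", {"weather": "look up the forecast for the given city"}),
    ])
    print(society.ask("What is the plan for today?"))
    print(society.ask("Will the weather be good tomorrow?"))
```

In a real multi-agent system the agents would be learned models rather than keyword lookups, and they might compete or negotiate rather than simply defer to one another, but the common question-and-answer interface Kurian describes is the piece this sketch illustrates.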

The Future of Human-Like Intelligence in AI

While AI models continue to break new ground in areas like natural language processing, image generation, and autonomous decision-making, the prospect of truly human-like intelligence remains distant. AI today excels at pattern recognition, data analysis, and performing specialized tasks, but it still struggles with abstract reasoning, intuition, and common-sense understanding—qualities that define human intelligence.

For AGI to become a reality, AI systems are expected to develop the ability to generalize knowledge across different domains, understand context deeply, and make independent decisions in unpredictable situations. While incremental progress is being made, the kind of breakthrough required for AGI is still an open question.

Hassabis’ prediction of a 10-year timeline suggests that AGI is not an imminent reality, contradicting Musk’s warnings of AI surpassing human intelligence within the next two years. The debate highlights a broader uncertainty about AI’s trajectory—whether it will steadily progress toward human-like intelligence or hit fundamental roadblocks that delay AGI indefinitely.

The Next Frontier—Artificial Super Intelligence (ASI)

While AGI remains a long-term goal, some tech leaders are already speculating about the next step: Artificial Super Intelligence (ASI), which would not merely match but surpass human intelligence. The timeline for ASI, however, is even more uncertain than that for AGI.

“No one really knows when superintelligence will happen,” Hassabis admitted.

However, analysts believe that if AGI does emerge in the next decade, it will represent one of the most significant technological shifts in human history. Unlike today’s AI systems, which are highly specialized in narrow tasks, AGI would be capable of reasoning, learning, and adapting in ways that rival human cognition. This is expected to revolutionize industries from healthcare to finance while raising profound ethical and existential questions about AI governance, safety, and control.

Musk’s push for regulation stems from concerns that, if left unchecked, AI could develop in ways that pose existential threats to humanity. However, with leading AI researchers still struggling to bridge the gap between today’s AI models and true AGI, the future of human-like intelligence in AI remains speculative.
