The Limits of Scaling: AI Researchers Grow Skeptical About AGI As Predictions Fail to Materialize

For years, tech leaders have assured the world that artificial general intelligence (AGI)—AI that matches or surpasses human cognition—was just around the corner. But despite unprecedented investments and a relentless push to scale AI models, AGI remains elusive.

A new survey of 475 AI researchers, conducted by the Association for the Advancement of Artificial Intelligence (AAAI), reveals a growing consensus: simply throwing more computing power at AI is unlikely to lead to AGI.

The findings challenge a widely held assumption among major AI players, who have spent the last decade racing to build larger and more complex AI models in the hope that one would eventually “crack” intelligence. However, 76% of surveyed researchers now believe that scaling up existing models is “unlikely” or “very unlikely” to lead to AGI.

This growing skepticism comes as the timelines industry leaders set for AGI begin to elapse, with little to show for them.

Industry Predictions Are Falling Apart

In 2014, Elon Musk claimed that AI could surpass human intelligence within a decade, famously warning that researchers were “summoning the demon.” He has doubled down since, suggesting that superintelligent AI could arrive by 2025. His aggressive predictions have been echoed by other tech figures, including OpenAI CEO Sam Altman, who has repeatedly suggested that AGI could emerge in the near future.

Yet, despite billions in investment and the development of AI models with trillions of parameters, AGI has yet to materialize. Instead, the most advanced AI models, like OpenAI’s GPT-4, Google’s Gemini, and Anthropic’s Claude, are still limited to pattern recognition and language prediction. They remain far from demonstrating genuine reasoning, problem-solving, or self-awareness—the hallmarks of true AGI.

Even Demis Hassabis, CEO of Google DeepMind, has struck a more cautious tone. In contrast to Musk’s warnings, Hassabis estimates that AGI might emerge within the next five to ten years—a far more reserved prediction than those made by some of his peers. But even this timeline remains speculative, as researchers increasingly argue that fundamental breakthroughs will be needed before AGI becomes a reality.

Why More Computing Power Isn’t Enough

The last few years have seen unprecedented investment in AI research. In 2023, venture capital funding for generative AI exceeded $56 billion, according to TechCrunch, and demand for AI accelerators contributed to the semiconductor industry hitting a record $626 billion in 2024. Companies like Microsoft, Google, and Amazon have gone as far as securing nuclear power deals just to meet the energy demands of training and running these massive AI models.

Yet, despite these investments, AI progress appears to be hitting a wall. OpenAI’s latest models, while introducing new capabilities, have shown only marginal improvements over their predecessors. AI researcher Stuart Russell of UC Berkeley described the situation to New Scientist:

“The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced,” he said.

This problem has become more apparent as models hit performance plateaus. In the past, expanding model sizes from 10 billion to 100 billion parameters yielded substantial improvements. But more recent expansions—from 100 billion to 1 trillion parameters—have produced diminishing returns. This has led many in the AI research community to question whether scaling alone will ever be enough to reach AGI.
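The shape of that curve is easy to see with a toy power law. Scaling-law studies such as Kaplan et al. (2020) report that loss falls roughly as a power of parameter count, meaning each tenfold increase in parameters multiplies loss by the same fixed factor, so the absolute gain shrinks with every step. The Python sketch below uses invented constants chosen only to illustrate that pattern; it is not fitted to any real model family.

```python
# Toy illustration of power-law scaling, loosely in the spirit of
# Kaplan et al. (2020). A and ALPHA are invented for illustration,
# not fitted to any real model.
A, ALPHA = 10.0, 0.076

def loss(n_params: float) -> float:
    """Hypothetical cross-entropy loss as a function of parameter count."""
    return A * n_params ** -ALPHA

prev = None
for n in (1e10, 1e11, 1e12):  # 10B -> 100B -> 1T parameters
    current = loss(n)
    note = "" if prev is None else f"  gain vs. previous: {prev - current:.3f}"
    print(f"{n:.0e} params -> loss {current:.3f}{note}")
    prev = current
```

Under any relation of this shape, each decade of scale buys a smaller absolute improvement while costing roughly ten times as much to train, which is the diminishing-returns squeeze researchers are describing.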

A Shift in AI Research Priorities

As skepticism grows, AI researchers are rethinking their priorities. Many are now shifting their focus from simply building larger AI systems to ensuring these systems operate within an acceptable risk-benefit profile. While AGI remains an area of interest, only a small fraction of researchers are actively pursuing its development.

There is also growing support for the idea that if AGI is developed by private companies, it should be publicly owned to mitigate risks. Despite concerns about safety, a majority of researchers believe that AGI research should continue, rather than be halted until full safety mechanisms are in place.

Some researchers are exploring alternative approaches to scaling, such as “test-time compute.” Instead of blindly increasing model size, this method allows AI to spend more time “thinking” before generating responses, leading to performance improvements without a massive surge in computing power. OpenAI has experimented with this approach, but Arvind Narayanan, a computer scientist at Princeton University, warns that it is “unlikely to be a silver bullet.”
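One concrete way to picture test-time compute is self-consistency, or majority voting over repeated samples, one of the simplest techniques in this family. The sketch below is a hypothetical toy: noisy_model is a stand-in that answers correctly 70% of the time, not a real model API, and the voting loop shows how spending more inference-time compute raises reliability without touching model size.

```python
import random
from collections import Counter

def noisy_model(rng: random.Random) -> int:
    """Stand-in for one stochastic model sample: correct (42) about 70%
    of the time, otherwise a random wrong answer. A real system would
    call an LLM here."""
    return 42 if rng.random() < 0.7 else rng.randint(0, 99)

def majority_vote(n_samples: int, seed: int = 0) -> int:
    """Self-consistency: spend more inference-time compute by drawing
    n_samples answers, then return the most common one. Reliability
    rises with n_samples even though the model itself never changes."""
    rng = random.Random(seed)
    votes = Counter(noisy_model(rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    for n in (1, 5, 25):
        print(f"{n:>2} samples -> answer {majority_vote(n)}")
```

The same logic underlies Narayanan’s caveat: sampling more answers multiplies inference cost, and voting can only surface abilities the underlying model already has.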

Tech CEOs Still Cling to the Scaling Dream

Some industry leaders refuse to abandon the belief that scaling alone will lead to AGI. Google CEO Sundar Pichai recently stated that the industry can “just keep scaling up,” though he acknowledged that the era of easy AI breakthroughs is coming to an end.

This divide between corporate optimism and academic skepticism raises the question: is AGI truly within reach, or has the industry been chasing a mirage?

For now, AI is advancing, but not in the way that was promised. The lofty predictions of AGI arriving by 2025 look increasingly unlikely, and even the more cautious estimates, such as Hassabis’s five-to-ten-year timeline, remain speculative. Against this backdrop, the view that AGI is not imminent is gaining traction.

As scaling shows diminishing returns and researchers shift toward more cautious, risk-aware development, the AI industry may soon be forced to rethink its entire approach. If AGI is to be achieved, it will likely require new paradigms—not just bigger models and more data.
