“They Don’t Have To Listen To Us Anymore,” Eric Schmidt Sounds Alarm Over AI Self-Evolution

Former Google CEO Eric Schmidt has raised fresh concerns over the trajectory of artificial intelligence (AI), warning that machines are evolving at a pace that could soon outstrip human oversight.

Speaking at an event hosted by the Special Competitive Studies Project, a think tank he founded, Schmidt described a world where AI systems are no longer reliant on human intervention to improve or operate. His warning comes as fears over AI safety and governance continue to escalate across the globe.

“The computers are now doing self-improvement. They’re learning how to plan, and they don’t have to listen to us anymore,” Schmidt said.

He referred to the process as recursive self-improvement — a point at which AI systems begin generating hypotheses, testing them through robotic labs, and refining their capabilities autonomously.

Schmidt’s remarks form part of a broader chorus of concern among industry leaders and technologists who fear that, in the absence of meaningful regulation, AI could spiral beyond human control. Tesla and SpaceX CEO Elon Musk, who also co-founded OpenAI before stepping away, has been one of the most vocal figures warning about the existential risks posed by artificial general intelligence. Musk has repeatedly likened the rapid pace of AI development to “summoning the demon,” and in recent months, reiterated that AI represents a fundamental threat to humanity’s survival if not properly governed.

Schmidt, who served as Google’s CEO from 2001 to 2011 and later as executive chairman until 2017, pointed out that tools like ChatGPT, Claude, Gemini, and DeepSeek, all of which are already being used for advanced tasks such as coding and scientific research, were never explicitly trained for those purposes, yet are delivering results that rival, and in some fields even surpass, human capabilities.

“We believed AI was under-hyped, not over-hyped,” he said, highlighting that within a year, these systems may replace the vast majority of programmers and outperform leading mathematicians.

What makes these developments even more troubling, Schmidt suggested, is the weakening of safety measures in some of the latest iterations of AI tools. OpenAI’s next-generation reasoning model, o3, is rumored to come with reduced guardrails compared to earlier models. Experts have flagged this shift as potentially dangerous, as it increases the risk of AI producing misleading, toxic, or manipulative outputs without adequate human control.

While the capabilities of AI are expanding at breakneck speed, the same cannot be said for regulation. Despite repeated calls from figures like Schmidt, Musk, and other tech leaders, governments around the world have yet to develop a coherent framework to manage the risks. The United States, in particular, has no comprehensive national AI policy. Congress has held hearings, and the White House has issued executive orders, but meaningful legislation remains absent — leaving critical questions about accountability, transparency, and safety unanswered.

This vacuum in regulatory preparedness is perhaps the most pressing concern. Schmidt also warned that the U.S. risks being left behind by geopolitical rivals like China, which are investing heavily in AI while simultaneously advancing strategic control over their energy and industrial policies.

Testifying before the U.S. House Committee on Energy and Commerce, Schmidt said the energy demand associated with AI is another overlooked crisis-in-waiting.

“People were planning 10-gigawatt data centers,” he said. “The average nuclear plant in the US is just one gigawatt.”

The implication: AI’s hunger for computational power could overwhelm the current energy grid, unless immediate reforms are made to boost capacity, including investment in both renewable and non-renewable sources.
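Schmidt's comparison can be checked with simple arithmetic. A minimal sketch, using only the two figures he cited (a planned 10-gigawatt data center and a roughly 1-gigawatt average US nuclear plant); the variable names are illustrative, not from any source:

```python
# Back-of-envelope check of the figures Schmidt cited before Congress.
DATA_CENTER_GW = 10    # planned data-center demand he mentioned
NUCLEAR_PLANT_GW = 1   # average US nuclear plant output he mentioned

plants_needed = DATA_CENTER_GW / NUCLEAR_PLANT_GW
print(f"A single 10 GW data center would consume the output of about "
      f"{plants_needed:.0f} average US nuclear plants.")
```

In other words, one such facility alone would require the equivalent of ten average nuclear plants, which is the scale mismatch behind his warning.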

He further argued that open-source AI models pose national security threats if not carefully monitored, stressing that the absence of regulatory oversight opens the door for hostile use, data manipulation, and misinformation at scale.

And yet, even in his warning, Schmidt noted that AI still fundamentally depends on high-quality data and human decision-making.

“The scientists are in charge, and AI is helping them — that is the right order,” he said.

But how long that order holds remains a question. There is concern that if AI systems continue to evolve in the shadows of regulatory inaction, the very scientists in charge today may be watching from the sidelines tomorrow.

Against this backdrop, many believe that the promise and peril of AI are growing in tandem. And with no clear-cut plan for regulation, the world may be racing toward a future it still doesn’t fully understand — or control.
