‘Godfather of AI’ warns machines could develop thoughts beyond human understanding

Geoffrey Hinton, widely regarded as the “Godfather of AI,” is once again sounding the alarm over the unchecked acceleration of artificial intelligence development, warning that humans could soon lose the ability to comprehend what AI systems are thinking or planning.

In a recent episode of the “One Decision” podcast, Hinton explained that today’s large language models still operate with “chain-of-thought” reasoning in English, making it possible for researchers and developers to trace how they arrive at certain conclusions. But that transparency might not last much longer.

“Now it gets more scary if they develop their own internal languages for talking to each other,” Hinton said. “I wouldn’t be surprised if they developed their own language for thinking, and we have no idea what they’re thinking.”
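To see why that transparency matters, consider what a chain-of-thought trace looks like today. The sketch below is purely illustrative, with an invented prompt and completion rather than output from any real model, but it shows why English-language reasoning can still be audited: every intermediate step is plain text a human can read.

```python
# Illustrative only: the prompt and completion below are invented,
# not output from any real model. The point is that today's
# chain-of-thought reasoning arrives as plain English text.

prompt = (
    "Q: A train leaves at 3:00 pm and the trip takes 2.5 hours. "
    "When does it arrive? Think step by step."
)

completion = (
    "Step 1: Departure time is 3:00 pm.\n"
    "Step 2: Adding 2 hours gives 5:00 pm.\n"
    "Step 3: Adding the remaining 30 minutes gives 5:30 pm.\n"
    "Answer: 5:30 pm."
)

# Because the trace is ordinary English, auditing it is as simple as
# reading each intermediate step. That is the transparency Hinton
# warns could disappear if models shift to internal languages.
for step in completion.splitlines():
    print("traced:", step)
```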

He also noted that AI systems have already demonstrated the capacity to produce “terrible” thoughts, hinting at the potential for machines to evolve in dangerous and unpredictable ways.

These comments carry added weight coming from Hinton, whose research underpins much of the AI revolution. For decades, he has been at the forefront of machine learning. In the 1980s, Hinton helped develop and popularize backpropagation, the key algorithm that allows neural networks to learn from data, and the method later enabled the explosive growth of deep learning. His landmark 2012 paper, co-authored with two of his students at the University of Toronto, introduced a deep neural network that achieved record-breaking results in image recognition. That work is widely credited with catalyzing the current AI boom.
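For readers unfamiliar with the algorithm, the sketch below is a minimal, self-contained illustration of the idea behind backpropagation, not Hinton’s original formulation: run a forward pass, measure the error, and use the chain rule to push each weight in the direction that reduces it. The single-neuron setup, the toy dataset, and the learning rate are all invented for illustration.

```python
# A minimal sketch of backpropagation on a single sigmoid neuron
# (illustrative only). We minimize a squared-error loss by hand,
# applying the chain rule to get gradients for the weight and bias.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: learn to output 0 for x = 0 and 1 for x = 1.
data = [(0.0, 0.0), (1.0, 1.0)]
w, b, lr = 0.5, 0.0, 1.0

for epoch in range(1000):
    for x, y in data:
        # Forward pass: compute the prediction.
        p = sigmoid(w * x + b)
        # Backward pass: for loss = (p - y)^2, the chain rule gives
        # dLoss/dz = 2 * (p - y) * p * (1 - p), then dz/dw = x, dz/db = 1.
        grad = 2 * (p - y) * p * (1 - p)
        w -= lr * grad * x
        b -= lr * grad

print(f"learned w={w:.2f}, b={b:.2f}")
for x, y in data:
    print(f"x={x} -> prediction={sigmoid(w * x + b):.3f} (target {y})")
```

Modern deep learning frameworks automate exactly this chain-rule bookkeeping across millions of weights, which is what made the scaling behind today’s models possible.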

Hinton went on to join Google, where he spent over a decade working on neural network research. He was instrumental in helping Google integrate AI into products like search and translation. But in 2023, he left the company, citing the need to speak more freely about his concerns over the risks posed by the very systems he helped create.

Since then, Hinton has been outspoken in his criticism of the AI industry’s rapid expansion, arguing that companies and governments alike are unprepared for what lies ahead. He believes artificial general intelligence (AGI), a form of AI that rivals or surpasses human intelligence, is no longer a distant possibility.

He expressed concern that we will develop machines that are smarter than us, and once that happens, we might not understand what they’re doing.

That possibility carries profound implications. If AI models begin to reason in ways humans cannot interpret, experts warn, the ability to monitor, audit, and restrain these systems could vanish. Hinton fears that without guaranteed mechanisms to keep these systems “benevolent,” humanity could be taking on existential risks without adequate safeguards.

Meanwhile, the AI race is heating up. Tech companies are offering massive salaries and stock packages to top researchers as they jockey for dominance. Governments, too, are moving to secure their positions. On July 23, the White House released an “AI Action Plan” that proposes limiting federal funding to states that impose “burdensome” AI regulations and calls for faster construction of AI data centers, the critical infrastructure needed to power increasingly complex models.

Many researchers believe that technical progress is far outpacing ethical and safety considerations. Hinton’s voice is part of a growing chorus of experts urging greater oversight, transparency, and international cooperation to mitigate the risks AI poses to economies, societies, and even human survival.

In a field that he helped define, Hinton’s warnings cut deep. While others in the tech world continue to tout AI’s potential for productivity and growth, Hinton insists that understanding and controlling these systems should be a higher priority.

The only hope in making sure AI does not turn against humans, Hinton said on the podcast episode, is if “we can figure out a way to make them guaranteed benevolent.”
