OpenAI's Sam Altman: AI Is Smarter Than Ever, But Society Seems Unfazed

OpenAI CEO Sam Altman says his predictions about the trajectory of artificial intelligence have largely proven correct. What has surprised him, however, is not the technology’s development—but society’s muted response to it.

Speaking on a recent episode of Uncapped with Jack Altman, the OpenAI chief reflected on how far generative AI has come. He believes the company’s latest language model, known as o3, demonstrates reasoning ability on par with a human Ph.D. across many subject domains. But despite the groundbreaking nature of such capabilities, Altman said it feels as though the world hasn’t quite caught up emotionally or institutionally.

“The models can now do the kind of reasoning in a particular domain you’d expect a Ph.D. in that field to be able to do,” Altman said. “And we’re like, ‘Oh okay…’ and we’re not that impressed. It’s crazy.”

Altman admitted he expected society to feel more changed—perhaps even shaken—by the rollout of these highly capable systems. Tools like ChatGPT have entered widespread use across the globe, augmenting everything from corporate workflows to scientific research. Yet everyday life, he noted, remains relatively stable and recognizable.

“If I told you in 2020, ‘We’re going to make something like ChatGPT that’s as smart as a Ph.D. student, and deploy it to a significant portion of the world who use it regularly,’ you’d think the world would look way more different than it does right now,” he said.

Altman believes OpenAI has effectively “cracked” reasoning—a cornerstone of human-level intelligence—and that this is reflected in o3’s performance on math problems, logic tasks, and coding challenges that would traditionally demand years of study and expertise.

However, he concedes that AI remains a co-pilot for now, not a driver. In his view, the real shift will come when AI can act autonomously. AI systems that help scientists triple their productivity are significant, he argues, but the game-changer will be when AI can independently conduct research or discover new scientific principles on its own.

“We don’t have AI maybe autonomously doing science,” he said. “But if a human scientist is three times as productive using o3, that’s still a pretty big deal… and as that keeps going… figure out novel physics…”

Is Altman Worried About the Risks?

Unlike other AI leaders—such as Anthropic’s Dario Amodei or DeepMind’s Demis Hassabis—who have publicly warned about catastrophic risks from superintelligent systems, Altman downplayed fears of existential doom. Instead, he acknowledged concerns that are more grounded, even mundane.

“I don’t know about way riskier,” he said, referring to powerful future models. “It gets riskier in sillier ways. Like, I’d be afraid to have a humanoid robot walking around my house that might fall on my baby, unless I really, really trusted it.”

Altman pointed out that damaging outcomes don’t always require high-tech sci-fi scenarios. The ability to cause large-scale disruption—from cyberattacks to bioweapons—can be executed without robotics or even general AI.

But he admitted that beyond the capability frontier, the societal picture remains murky.

“I think we will get to extremely smart and capable models—capable of discovering important new ideas, capable of automating huge amounts of work. But I feel totally confused about what society looks like if that happens,” he said.

Maybe in the Future, With More Capable Models

Altman’s comments highlight a growing disconnect between AI’s actual capabilities and public perception. For a technology now underpinning everything from legal research to code generation, public discourse remains relatively subdued. That gap, he suggests, may soon narrow.

But while AI insiders continue to push boundaries, Altman said it’s time for a broader conversation—not just about what the technology can do, but how society should adapt to and benefit from it.

“Maybe at this point more people should be talking about: how do we make sure society gets the value out of this?” he concluded.

Altman, long at the forefront of the AI boom, remains bullish on progress. But even he is unsure what a future shaped by autonomous, Ph.D.-level AI might actually look like—or how prepared the world will be when it finally arrives.

Would you switch jobs for $100 million? Mark Zuckerberg hopes so — if you’re a top-flight AI researcher, that is. The Meta CEO has tried to poach staffers from OpenAI and Google DeepMind for his new “superintelligence” unit with compensation packages worth more than $100 million, according to OpenAI chief Sam Altman. But so far, Altman added, “none of our best people have decided to take him up on that.” Altman said his employees believe OpenAI has the best shot at achieving artificial general intelligence.
