He Can’t Code: New Yorker Exposé Reignites Questions Over Altman’s Technical Authority and OpenAI’s Power Structure

A sweeping New Yorker investigation into OpenAI chief executive Sam Altman has reopened one of the most consequential questions in the artificial intelligence industry: whether the man who has become synonymous with the AI boom can be trusted with the extraordinary power now concentrated around OpenAI.

The report, based on more than 100 interviews and previously undisclosed internal documents, offers the most detailed account yet of the doubts that led to Altman’s dramatic ouster and rapid reinstatement in 2023.

Far from Altman’s polished public image as a visionary steward of the AI era, the investigation paints a more troubling portrait, one that goes beyond questions of personality, including his reported inability to write code, and cuts into the governance architecture of one of the world’s most strategically important companies.

The report is largely focused on the secret memos compiled by OpenAI cofounder and former chief scientist Ilya Sutskever, which were circulated among board members in the lead-up to Altman’s removal.

One memo reportedly begins with the line: “Sam exhibits a consistent pattern of …” with the first item listed as “Lying.” That allegation is notable not simply because it concerns Altman personally, but because OpenAI’s original structure was explicitly built around trust.

Unlike conventional Silicon Valley startups, OpenAI was founded as a nonprofit with a board whose legal duty was to prioritize the safety of humanity over commercial success. The CEO, in that framework, was never meant to be just a growth executive. He was meant to be a custodian of potentially civilization-shaping technology.

That is why the report’s most important insight is not whether Altman can code.

The more consequential issue is whether institutional safeguards around him have weakened as OpenAI’s commercial and political influence has expanded. The article suggests that several insiders questioned whether Altman’s technical authority has been overstated.

Former colleagues reportedly said he lacks deep experience in programming and machine learning and has at times misused basic technical terminology in discussions.

In isolation, that may not be disqualifying because many of the most powerful technology CEOs are not the lead engineers behind their products. But in Altman’s case, the technical mythology has played an important strategic role.

He has increasingly been positioned in public discourse as an AI statesman, someone whose pronouncements on superintelligence, economic transformation, and national security carry unusual weight in Washington and global capitals.

Just days before the exposé, OpenAI released a broad industrial policy paper calling for a new economic framework for the intelligence age, including proposals on taxation, public wealth funds, and the future of work.

This juxtaposition has become the focal point because Altman is helping shape the public policy response to AI disruption, while the New Yorker investigation raises questions about whether the internal governance of his own company has kept pace with the scale of that influence.

In many ways, this story has been interpreted as one about power concentration, since OpenAI is no longer merely a research lab or even a fast-growing technology company. The ChatGPT maker is now a central actor in global infrastructure build-out, defense contracting, cloud partnerships, education tools, enterprise software, and increasingly state-level policy discussions.

The company is reportedly preparing for a public offering that could approach a $1 trillion valuation. That scale means concerns around executive accountability take on systemic importance.

The article also revisits the 2023 boardroom crisis with new details. According to the reporting, when Altman was fired, his allies among investors and senior executives quickly mobilized to reverse the decision.

One line from the report stands out: investor Josh Kushner reportedly said, “We just immediately went to war.”

That episode exposed a deeper tension within OpenAI’s structure. The nonprofit board may have had formal authority, but the economic ecosystem surrounding the company (investors, employees with large equity stakes, and strategic partners such as Microsoft) had enormous practical leverage.

In effect, the crisis suggested that governance mechanisms designed to restrain leadership could be overwhelmed by capital-market pressure. That observation makes one former researcher’s comment particularly revealing.

Carroll Wainwright told the magazine, “he sets up structures that, on paper, constrain him in the future. But then, when the future comes, and it comes time to be constrained, he does away with whatever the structure was.”

That quote captures the central criticism now hanging over Altman’s leadership style: that formal safeguards exist, but may not be durable when they conflict with strategic ambition.

As OpenAI’s models are increasingly embedded in public institutions, national security workflows, immigration systems, and enterprise operations, policymakers are expected to see the exposé as more than corporate drama.
