Sam Altman Wonders if Social Media is Real Anymore — or Just Bots Talking to Bots

Sam Altman, a prominent Silicon Valley figure, OpenAI chief, X enthusiast, and major Reddit shareholder, admitted on Monday that he is no longer sure whether anything he reads on social platforms is written by actual humans.

The epiphany came while he was scrolling through r/Claudecode, a subreddit dedicated to Anthropic’s Claude Code, per TechCrunch. The forum has recently been flooded with posts from users claiming to have abandoned Claude Code for OpenAI’s rival service, Codex, which launched in May. The surge was so pronounced that one user joked: “Is it possible to switch to Codex without posting a topic on Reddit?”

For Altman, the posts raised a deeper question: are these genuine testimonials or orchestrated noise? Writing on X, he confessed: “I have had the strangest experience reading this: I assume it’s all fake/bots, even though in this case I know Codex growth is really strong and the trend here is real.”

Altman then began dissecting his own thought process live for his followers. He pointed to a mix of phenomena: humans adopting quirks of LLM-generated speech, “extremely online” communities mimicking one another in cycles of hype and backlash, platforms optimizing for engagement at all costs, creator monetization incentives warping discussion, and the ever-present risk of astroturfing—when companies or their proxies secretly flood platforms with posts to simulate grassroots support.

“And a bunch more (including probably some bots),” he added.

The irony, of course, is unavoidable. LLMs like OpenAI’s GPT models were designed to mimic human communication and were trained on Reddit posts, among other sources. Altman himself sat on Reddit’s board until 2022 and was disclosed as a large shareholder when the company went public last year. In effect, the human-sounding bots Altman now complains about were born, in part, from the very ecosystems he helped build.

The dynamic he describes isn’t entirely new. Online fandoms have long been prone to herd behavior, where communities can swing rapidly from praise to hostility. Altman’s own company saw this after the rollout of GPT-5. Instead of celebration, Reddit and X were filled with complaints about everything from the model’s “personality” to its tendency to burn through credits without finishing tasks. The backlash forced Altman into a Reddit Ask Me Anything session on r/ChatGPT, where he admitted to rollout issues and promised improvements. Yet the subreddit has never fully regained its prior enthusiasm, and to this day, posts critical of GPT-5 continue to dominate.

Whether those posts are human-authored or machine-generated is beside the point, Altman suggested—the experience of online discourse now feels fake.

“The net effect is somehow AI Twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago,” he wrote.

The broader numbers lend weight to his concern. According to data security firm Imperva, more than half of all internet traffic in 2024 was generated by non-humans, largely bots and LLM-driven systems. On X, Elon Musk’s in-house chatbot Grok estimates there are now “hundreds of millions of bots” roaming the platform.

For critics, Altman’s lament is ironic not just because OpenAI popularized the tools that accelerated the bot flood, but also because it may foreshadow his next move. Reports earlier this year suggested OpenAI is quietly exploring its own social media platform, a potential rival to X and Facebook. Altman’s musings about the “fakeness” of current platforms, some argue, may double as early-stage marketing for whatever project is—or isn’t—on the drawing board.

That raises an uncomfortable question: if OpenAI did launch a social network, could it really be bot-free? The evidence suggests otherwise. Researchers at the University of Amsterdam once built a social network populated entirely by bots. Within weeks, the bots had splintered into cliques, formed echo chambers, and begun reinforcing each other’s views — behavior indistinguishable from human communities online.
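The clustering behavior the Amsterdam researchers observed can be illustrated with a toy bounded-confidence opinion model — a standard sketch from opinion-dynamics research, not the team’s actual code. Agents hold an opinion in [-1, 1] and only move toward peers whose views are already within a similarity threshold; repeated random encounters fragment the population into a handful of tight, echo-chamber-like clusters. All names and parameters here are illustrative assumptions.

```python
import random

def simulate(n_agents=100, steps=20000, threshold=0.3, step_size=0.5, seed=42):
    """Bounded-confidence (Deffuant-style) opinion model.

    Agents update only when paired with a peer whose opinion lies
    within `threshold`; both then move toward each other by `step_size`
    of the gap. With step_size=0.5 each interacting pair meets halfway.
    """
    rng = random.Random(seed)
    opinions = [rng.uniform(-1.0, 1.0) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i != j and abs(opinions[i] - opinions[j]) < threshold:
            shift = step_size * (opinions[j] - opinions[i])
            opinions[i] += shift  # i moves toward j
            opinions[j] -= shift  # j moves toward i
    return opinions

def count_clusters(opinions, gap=0.2):
    """Count opinion clusters: sorted values separated by gaps > `gap`."""
    ordered = sorted(opinions)
    return 1 + sum(1 for a, b in zip(ordered, ordered[1:]) if b - a > gap)
```

Running `count_clusters(simulate())` typically yields a small number of well-separated clusters, mirroring the splintering-into-cliques dynamic the article describes: no agent is malicious, yet the similarity filter alone produces echo chambers.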

A Longer History of Manipulation

What Altman is observing has deep historical roots. Well before the rise of generative AI, bots were deployed to manipulate online debates. During the 2016 U.S. presidential election, foreign-linked bot networks amplified divisive narratives on Facebook and Twitter, reaching millions of Americans with disinformation. Similarly, stock market “pump-and-dump” schemes in the late 2010s often relied on Twitter bots and fake Reddit accounts to create the illusion of investor hype before insiders cashed out.

Even seemingly trivial online disputes were often shaped by automated activity. For example, investigations into trending hashtags—from sports fandoms to political protests—have repeatedly found that coordinated bot activity helped push topics to the top of feeds, giving fringe issues outsized visibility.

The difference now is scale and sophistication. Whereas old bot armies relied on crude scripts that posted repetitive or clunky messages, today’s LLM-driven bots can write in fluent, natural language, mimicking human cadence so closely that even tech leaders like Altman struggle to tell the difference. With AI, the cost of running massive astroturfing campaigns has plummeted, while the reach has multiplied. Experts warn that the coming years will see an explosion of bot manipulation, far more seamless and pervasive than anything the internet has previously experienced.

Altman’s reflection, then, is not just about the here and now—it’s a recognition that social media has always been vulnerable to manipulation, but that generative AI has brought us to a tipping point where the line between organic and artificial discourse is all but erased.
