OpenAI’s Pentagon Deal Triggers User Backlash, Sees Uninstalls in ChatGPT as Claude Climbs App Store Chart

OpenAI is facing mounting backlash following its recent agreement with the U.S. Department of Defense to deploy advanced AI systems in classified environments, a move that has reportedly triggered a wave of ChatGPT uninstalls and fueled a surge in downloads of rival chatbot Claude.

Recall that earlier this month, the company reached an agreement with the Pentagon to provide AI systems for classified use cases. OpenAI had requested that such access be extended to all AI companies, but the announcement quickly sparked public concern. Critics questioned how artificial intelligence could be deployed in military contexts and expressed unease over the growing influence of private technology firms in government defense operations.

According to a report by TechCrunch, ChatGPT uninstalls surged by 295% over the weekend following the news. Meanwhile, downloads of Claude, developed by Anthropic, rose 51% during the same period.


Data from analytics firm Sensor Tower showed that one-star reviews for ChatGPT spiked 775% on Saturday and then rose another 100% day-over-day on Sunday, while five-star reviews dropped by 50%. At the same time, Claude climbed to the top of the U.S. Apple App Store rankings.

In a strategic move to attract dissatisfied users, Anthropic rolled out new features for Claude’s free tier, including context recall across conversations and a tool that allows users to import chat histories from competing bots like ChatGPT.

Speaking on OpenAI’s deal with the Pentagon, Debra Andrews, founder of Marketri, framed the situation as a matter of brand trust rather than technological competition.

In a post on LinkedIn, she wrote,

“ChatGPT had everything a brand could want. First-mover advantage. Hundreds of millions of users. A mission that made people feel good about using it. And then, one decision at a time, they gave it all away. A pivot from nonprofit to profit. Ads after promising there wouldn’t be ads. A rushed Pentagon deal that even Sam Altman admitted looked ‘opportunistic and sloppy.’

“Meanwhile, Claude quietly did something radical: it said no. No to ads. No to defense contracts that violated its own terms. And last weekend, it overtook ChatGPT as the most downloaded app in the U.S. App Store. This is not a technology story. It is a trust story. And it has lessons for every brand that thinks being first means being safe.”

OpenAI CEO Sam Altman acknowledged the criticism on Monday, stating that the company “shouldn’t have rushed” its defense agreement. He reposted what he described as an internal memo on X, outlining planned revisions to the contract to clarify OpenAI’s principles on surveillance and safety.

According to Altman, the revised language specifies that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” The memo further noted that “the Department understands the limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Altman also stated that the Defense Department affirmed that OpenAI’s tools would not be used by intelligence agencies such as the NSA.

“There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” he said, adding that the company would collaborate with the Pentagon to implement technical safeguards.

The agreement followed reports that negotiations between Anthropic and the Defense Department had broken down. Although Altman reportedly told employees that OpenAI shared the same “red lines” as Anthropic, government officials had previously criticized Anthropic for what they described as excessive caution around AI safety.

Anthropic CEO Dario Amodei reiterated his company’s position, stating that it “cannot in good conscience” allow the Department of Defense to use its models in all lawful use cases without limitation. He added that the agency’s threats would not alter Anthropic’s stance.

Defense Secretary Pete Hegseth has reportedly threatened to label Anthropic a “supply chain risk” or invoke the Defense Production Act to compel compliance, though discussions between the parties remain ongoing.

Outlook

The unfolding controversy underscores a broader debate about the role of artificial intelligence in military and surveillance applications. While OpenAI has moved to clarify restrictions and reinforce safeguards, the episode reveals how quickly public sentiment can shift when trust is perceived to be compromised.

For OpenAI, the immediate challenge will be stabilizing user confidence while maintaining strategic government partnerships. More broadly, the incident signals that in the AI race, technological capability alone may not determine market leadership. As public awareness around AI governance deepens, companies may increasingly find that transparency, consistency, and ethical clarity are as critical as innovation itself.
