Anthropic’s Claude Surges to No. 2 on U.S. App Store After Trump Moves to Block Government Use

Anthropic saw its Claude artificial intelligence assistant climb to No. 2 among free U.S. apps on Apple’s App Store late Friday, just hours after President Donald Trump directed federal agencies to stop working with the company and the Pentagon moved to classify it as a supply-chain risk.

The sudden jump in consumer downloads followed a high-profile clash between Anthropic and the administration over the permissible use of AI in defense and surveillance contexts. The matter has thrust the startup into national headlines and appears to have amplified public awareness of its stated guardrails against mass domestic surveillance and fully autonomous weapons.

On Truth Social, Trump accused the company of attempting to “STRONG-ARM the Department of War,” the administration’s renamed Department of Defense, and said he was ordering a six-month phase-out of Anthropic’s technology across federal agencies. He warned that if the company failed to cooperate with the transition, he would use “the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”


Defense Secretary Pete Hegseth said he had requested Anthropic be labeled a national security supply-chain risk, a designation that could prevent U.S. defense contractors from using its AI tools in Pentagon-related work.

Anthropic CEO Dario Amodei responded that the company provides “substantial value” to the armed forces and expressed hope that the Department would reconsider. In a separate statement, the company said it would challenge any risk designation in court, arguing such a move would be legally unsound and set a dangerous precedent for American firms negotiating contract terms with the government.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic said.

The controversy coincided with a sharp rise in Claude’s consumer visibility. On Saturday, OpenAI’s ChatGPT remained No. 1 on Apple’s U.S. free app rankings, while Google’s Gemini held the No. 3 position.

Claude’s ascent is notable given its historical position behind consumer-facing rivals. As recently as Jan. 30, Claude ranked No. 131 in the U.S., according to Sensor Tower data. It moved into the top 20 and top 50 intermittently through February before reaching No. 2 following Friday’s developments.

The spike suggests a potential “headline effect,” where regulatory scrutiny and political confrontation translate into consumer downloads. Public positioning around AI ethics—particularly opposition to autonomous weapons and domestic surveillance—has resonated with segments of users concerned about the rapid militarization of artificial intelligence.

High-profile social media attention added to the visibility. Pop singer Katy Perry posted a screenshot of Anthropic’s Pro subscription with a heart overlaid shortly after the administration’s announcement.

A widening divide over AI guardrails

The dispute centers on whether private AI companies can impose contractual restrictions on how their models are used by the military. Anthropic has sought explicit guardrails against mass domestic surveillance and fully autonomous weapons systems.

Pentagon officials have argued that U.S. law, not corporate terms of service, governs battlefield deployment. Hegseth said the Defense Department must retain flexibility in how it uses AI in national defense.

The administration’s move establishes a precedent: that federal authorities may sideline AI suppliers whose policy positions are viewed as constraining military autonomy. Legal analysts note that a formal supply-chain risk designation could bar tens of thousands of contractors from incorporating Anthropic’s models into defense-related workflows.

Franklin Turner, an attorney specializing in government contracts, described blacklisting Anthropic as “the contractual equivalent of nuclear war,” given the cascading implications for government and private-sector business.

Anthropic’s AI tools have already been used within the intelligence community and armed services, and the company was among the first to handle classified workloads through Amazon’s cloud infrastructure.

OpenAI’s deal with the Defense Department

The administration’s action unfolded alongside an announcement from OpenAI that it had reached an agreement with the Defense Department to deploy its models within classified networks. OpenAI CEO Sam Altman said on X that the Pentagon’s principles for human responsibility over weapon systems and a prohibition on mass U.S. surveillance were incorporated into the contract.

It was not immediately clear how those contractual terms compare to Anthropic’s proposed guardrails.

The juxtaposition highlights a broader competitive race among major AI labs for defense contracts. The Pentagon has signed agreements worth up to $200 million each with leading firms, including Anthropic, OpenAI, and Google.

Anthropic, founded in 2021 by former OpenAI researchers, has gained traction over the past year as a provider of coding-focused and enterprise AI models. Meanwhile, OpenAI’s ChatGPT now reports more than 900 million weekly users globally. OpenAI has also expanded enterprise distribution through partnerships with consulting firms such as Accenture and Capgemini.

Anthropic’s financial backers include Google and Amazon, underscoring the interconnected nature of the AI ecosystem even as companies compete for government and commercial dominance.

National security and civil liberties debate

The standoff revives longstanding tensions between Silicon Valley and the Pentagon. In 2018, employees at Google protested the company’s involvement in Project Maven, a Defense Department effort to use AI to analyze drone footage. Since then, relationships have fluctuated between resistance and rapprochement, particularly as geopolitical competition with China elevated AI to a national security priority.

Former defense AI officials have warned that fewer guardrails could heighten concerns about due process, civilian casualties, and collateral damage in increasingly automated conflicts. Wars in Ukraine and Gaza have showcased expanded use of AI-enabled systems, intensifying debate about so-called “killer robots.”

Anthropic has argued that U.S. law has not fully caught up with AI’s capabilities. For example, current statutes do not necessarily prohibit aggregation of benign data to infer sensitive personal information at scale.

The White House’s intervention reframes the debate around sovereign authority: whether elected officials or private companies define the operational boundaries of military AI.

If the supply-chain risk designation proceeds, Anthropic could face immediate revenue losses from federal contracts and indirect impacts across defense-adjacent industries. The designation may also influence procurement decisions in allied countries.

At the same time, the controversy appears to have elevated Anthropic’s consumer profile. The company’s rapid climb in app rankings suggests that public opposition to certain defense uses of AI can translate into brand differentiation in a crowded market.
