U.S. Defense Secretary Pete Hegseth designated Anthropic, the company behind the AI model Claude, as a “supply chain risk to national security.”
This is a highly unusual step, typically reserved for foreign adversaries or entities with ties to threats like China, and never before applied to a U.S.-based company in this context. It followed an escalating dispute between the Pentagon and Anthropic over the military’s use of Claude.
The Pentagon demanded unrestricted access for “any lawful purpose,” including scenarios involving mass domestic surveillance of U.S. citizens or fully autonomous lethal weapons, meaning systems that can select and engage targets without human intervention.
Anthropic refused to remove its built-in safeguards on these specific uses, citing risks to democratic values and civil liberties as well as broader ethical concerns. Anthropic had previously secured a contract worth up to $200 million with the Department of Defense to provide frontier AI capabilities for national security applications such as intelligence analysis, modeling and simulation, cyber operations, and operational planning.
Claude was one of the first (and only) frontier AI models deployed on classified U.S. government networks. After negotiations broke down, President Trump directed all federal agencies to cease using Anthropic’s technology, with a phase-out period.
Hegseth then announced the designation via X, stating that effective immediately, no contractor, supplier, or partner doing business with the U.S. military could conduct commercial activity with Anthropic. This effectively blacklists the company from the vast defense ecosystem.
Shortly after, rival OpenAI announced a deal to provide its models to the Pentagon for classified use. Anthropic’s CEO Dario Amodei and the company responded strongly, calling the move “legally unsound,” contradictory (the government labels them a risk even as its prior contracts imply Claude is essential), and unprecedented.
They vowed to challenge the designation in court, arguing it sets a dangerous precedent for any U.S. company negotiating with the government. Anthropic emphasized its prior cooperation, including being the first to deploy in classified environments and national labs.
The fallout highlights tensions in the AI-national security space: on one side, the government insists private companies cannot impose limits on lawful military and intelligence uses; on the other, Anthropic (and some observers) sees this as government overreach that could enable mass surveillance or “killer robots” without oversight.
Consumer interest in Claude ironically spiked, with some reports placing it at #1 on app stores, while enterprises tied to government contracts began purging it. This dispute tests the balance of power between frontier AI firms and the state in an era where AI increasingly shapes warfare, intelligence, and society.
OpenAI signed a deal with the Pentagon mere hours after the government blacklisted rival Anthropic when similar negotiations failed. The agreement allows OpenAI’s advanced AI models, likely including successors to the GPT series, to be deployed on the U.S. military’s classified networks for national security applications such as intelligence analysis, operational planning, cyber operations, and modeling and simulation, similar to the prior Anthropic contract.
Reports indicate that agreements with major AI labs, including OpenAI, Anthropic (previously), and others like Google, have been worth up to $200 million each in recent years. The exact value of OpenAI’s new deal hasn’t been publicly disclosed but aligns with that scale for classified AI access.
Under the reported terms, the Pentagon can use the AI systems for all lawful purposes, consistent with applicable law, operational requirements, and established safety and oversight protocols, subject to several limits:

- No use for mass domestic surveillance of U.S. citizens.
- No independent direction of autonomous weapons systems where law, regulation, or DoD policy requires human control; human responsibility for the use of force remains mandatory.
- No involvement in other high-stakes automated decisions.
- Cloud-only deployment, with no edge devices that could enable offline autonomous lethal use.
- OpenAI retains and runs its own safety stack (guardrails and controls), with no provision of “guardrails-off” or non-safety-trained models.
- Cleared OpenAI personnel are involved in oversight.

The agreement states: “The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.”
OpenAI described this as a “multi-layered” approach with stronger protections than prior agreements, including Anthropic’s original one, combining technical controls, contractual clauses, cloud restrictions, and existing U.S. law. The company requested that the same terms be extended to all AI companies and urged de-escalation in the Anthropic dispute.
The Pentagon had demanded unrestricted “all lawful purposes” access without company-imposed limits on sensitive uses. Anthropic refused to drop its hard red lines on mass domestic surveillance and fully autonomous weapons, leading to its designation as a “supply chain risk.”
OpenAI negotiated a compromise: formally agreeing to the broad “lawful purposes” clause while enforcing its red lines through its retained technical and legal controls. Critics question whether these safeguards will prove as ironclad in practice as on paper, with some viewing OpenAI’s quicker deal as more permissive.
Altman admitted the process was “rushed” and the optics poor, but emphasized mutual respect for safety. This positions OpenAI as the primary frontier AI provider for classified DoD environments following the Anthropic fallout. The deal highlights ongoing tensions between AI companies’ ethical boundaries and government demands for unrestricted national security access.



