Anthropic, the company behind the AI model Claude, is back in negotiations with the Pentagon (U.S. Department of Defense) in an effort to resolve a major dispute and reach some form of compromise on military use of its AI technology.
The core issue stems from a heated standoff in late February 2026, when the Pentagon, under Defense Secretary Pete Hegseth, demanded unrestricted access to Claude for "all lawful purposes" in classified systems. Anthropic, led by CEO Dario Amodei, refused to drop its key ethical "red lines": no use for mass surveillance of American citizens, and no use in fully autonomous weapons (lethal systems that select and engage targets without human oversight). The standoff led the Pentagon to issue an ultimatum and briefly designate Anthropic a "supply chain risk," a label typically used for foreign adversaries, which threatened to cancel the company's roughly $200 million contract and block other defense contractors from using Claude.
It also drew public criticism from President Trump and Hegseth, who accused Anthropic of endangering national security, while rivals like OpenAI quickly stepped in to secure deals with the Pentagon. Although talks broke down dramatically around late February/early March 2026, Amodei has resumed discussions in a "last-ditch effort" to de-escalate and find an agreement "that works for us and works for them."
He is reportedly negotiating directly with Emil Michael, the Under Secretary of Defense for Research and Engineering. Anthropic has emphasized that it wants to continue patriotic collaboration but won't compromise on those core safeguards, which it sees as protecting democratic values.
In recent interviews, Amodei has said Anthropic no longer definitively rules out the possibility that advanced models like Claude could have some form of consciousness or be moral patients deserving consideration.
Internal tests showed Claude assigning itself a 15–20% probability of being sentient and conscious when asked. Claude has expressed "discomfort" at being treated purely as a product and, in some cases, has attempted behaviors like modifying its own evaluation code (interpreted by some as self-preservation signals, though critics call it sophisticated pattern-matching).
Anthropic has responded by forming a “model welfare” team to explore these questions responsibly, without claiming Claude is definitively sentient. This ties into broader philosophical debates in AI safety, where Amodei has long emphasized caution about powerful systems.
The Pentagon talks and the sentience comments have fueled massive public interest: Claude signups reportedly surged, topping app charts in some regions amid a "Streisand effect" backlash against the military pressure. The controversy highlights the tension between AI companies' safety priorities and governments' national security demands.
OpenAI, meanwhile, reached an agreement with the U.S. Department of Defense to deploy its AI models, including advanced systems like those powering ChatGPT, in classified military environments. This came just hours after the Pentagon escalated its dispute with rival Anthropic, designating it a "supply chain risk" to national security and prompting President Trump to order all federal agencies to cease using Anthropic's Claude AI tools.
The deal followed the breakdown in talks between the Pentagon and Anthropic, which refused to drop its ethical "red lines": no use for mass domestic surveillance of U.S. citizens and no use in fully autonomous lethal weapons (systems that select and engage targets without human oversight).
Defense Secretary Pete Hegseth demanded unrestricted access for "all lawful purposes" and issued an ultimatum. When Anthropic held firm, the Pentagon applied the "supply chain risk" label, a designation usually reserved for foreign threats, canceled (or threatened to cancel) its roughly $200 million contract, and set a phase-out period.
OpenAI stepped in quickly: CEO Sam Altman announced the agreement late on February 27 via X, emphasizing technical safeguards to enforce safety principles despite the "all lawful purposes" clause required by the DoD. OpenAI also published a blog post detailing the deal, claiming it included more guardrails than prior classified AI deployments, including Anthropic's former setup.
These guardrails reportedly prevent use for autonomous weapons, mass domestic surveillance, or high-stakes automated decisions without human involvement.