The confrontation between the administration of Donald Trump and artificial intelligence firm Anthropic intensified on Thursday after the Pentagon formally designated the company a supply-chain risk — an extraordinary step that could force defense contractors to stop using its flagship AI system, Claude, in military projects.
Anthropic chief executive Dario Amodei said the company would challenge the decision in court, describing the move as legally flawed and saying it left the firm with little choice but to pursue litigation.
The lawsuit marks what Amodei framed as a last resort after days of negotiations with the U.S. Department of Defense failed to produce a compromise over how the military can deploy advanced AI systems. But the legal challenge is also expected to escalate the already bitter standoff between the Pentagon and one of the fastest-growing companies in the U.S. AI industry.
“This action is not legally sound,” Amodei said in a statement. “We see no choice but to challenge it in court.”
The Pentagon said it had formally informed Anthropic that its technology presents a supply-chain risk to military systems, an assessment that took effect immediately and appeared to close the door on further negotiations. The designation has major implications because Claude is already embedded in a range of national security software tools used by contractors and government agencies.
A dispute over who controls AI in warfare
At the heart of the dispute is a clash over how much influence private technology companies should have over the use of their systems in military operations.
Anthropic had insisted on retaining safeguards limiting the use of its AI in two areas: mass domestic surveillance and fully autonomous weapons. The company argues that those restrictions are consistent with widely discussed global AI safety principles. The Pentagon rejected that position, arguing that a contractor cannot impose policy constraints on how the military conducts lawful operations.
In its statement, the Defense Department said the issue boiled down to a single principle: the armed forces must be able to deploy technology across all lawful missions without interference from vendors.
“The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk,” the Pentagon said.
Amodei countered that the limits Anthropic sought were not operational constraints on battlefield decisions but high-level safeguards designed to prevent the technology from being used in controversial or ethically fraught applications.
He also said the company had been in “productive conversations” with defense officials about ways to continue supporting military users while addressing those concerns.
Trump, however, ordered the Defense Department to phase out Anthropic technology within six months, signaling that the administration had already moved toward a confrontation rather than compromise.
Legal battle could redefine procurement rules
Anthropic’s lawsuit is expected to test how far the government can stretch supply-chain risk authorities — a set of rules originally designed to block technology tied to foreign adversaries.
Federal law typically defines such risks as the possibility that companies controlled by hostile governments could sabotage or infiltrate U.S. systems.
Critics say applying those authorities to a domestic company represents a significant reinterpretation of the rules.
Senator Kirsten Gillibrand warned the move could undermine the U.S. technology sector.
“This reckless action is shortsighted, self-destructive, and a gift to our adversaries,” she said.
A coalition of former national security officials echoed that concern in a letter to lawmakers, arguing that the authority was intended to protect the United States from companies tied to governments such as China or Russia — not American firms operating under U.S. law.
Among the signatories was Michael Hayden, who warned that using the designation in this way could set a precedent that deters private companies from cooperating with the government on advanced technologies.
Policy analysts say the case could become a landmark legal fight over the balance of power between government procurement authorities and private developers of powerful AI systems.
Contractors begin adjusting
Defense contractors have already started preparing for the possibility that Anthropic technology may be removed from some military programs.
Lockheed Martin said it would comply with the administration’s directive and look for alternative providers of large language models, although it emphasized that its programs are not dependent on a single AI vendor.
Meanwhile, Microsoft said its lawyers believe the Pentagon’s designation applies only to Anthropic technology used directly within defense contracts. That interpretation would allow continued collaboration with Anthropic on commercial projects.
The uncertainty surrounding the scope of the rule could create confusion across the defense technology ecosystem, where large language models are increasingly embedded in analytics platforms, cybersecurity tools, and decision-support systems.
The dispute has also sharpened competition between Anthropic and its main rival, OpenAI.
Anthropic was founded in 2021 by former OpenAI leaders, including Amodei, after internal disagreements about the pace and safety of AI development.
Hours after the Pentagon first threatened punitive measures last week, OpenAI announced an agreement to deploy ChatGPT in classified military environments. OpenAI chief executive Sam Altman later acknowledged that the timing of the announcement created the impression that the company was capitalizing on its rival’s troubles.
He said the agreement had been rushed and “looked opportunistic and sloppy.”
Public support grows for Anthropic
While the Pentagon’s decision threatens a major stream of government revenue, Anthropic has seen a surge of interest from consumers and developers who support its stance on AI safety.
The company said more than one million people signed up daily for Claude during the past week, pushing the chatbot to the top of the Apple App Store rankings in more than 20 countries.
That surge suggests the dispute is resonating far beyond Washington, highlighting a growing public debate about whether the creators of advanced AI systems should be able to set ethical boundaries for how their technology is used.