A federal appeals court in Washington, D.C., on Wednesday denied Anthropic’s emergency request to temporarily halt the Department of Defense’s designation of the AI company as a supply chain risk, dealing a setback to the startup as its legal battle with the Trump administration continues.
The U.S. Court of Appeals for the D.C. Circuit ruled that the “equitable balance” favors the government. In pointed language, the panel stated: “On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict.”
For that reason, the court denied the motion for a stay pending full review on the merits.
The decision creates a split outcome across courts. In a separate but related case last month, a federal judge in San Francisco granted Anthropic a preliminary injunction blocking the administration from enforcing a broader ban on the use of its Claude model across federal agencies. Taken together, the rulings leave Anthropic excluded from direct DOD contracts and from work involving defense contractors on Pentagon projects, while allowing it to continue serving other government agencies and non-defense customers.
Defense contractors are barred from using Claude in their DOD-related work but may continue using it for commercial or non-military projects.
The Pentagon labeled Anthropic a supply chain risk in early March 2026 under authorities including 10 U.S.C. § 3252 and provisions of the Federal Acquisition Supply Chain Security Act. The designation, historically applied mainly to foreign adversaries, requires contractors to certify they are not using Anthropic’s Claude models in military-related activities.
Defense Secretary Pete Hegseth first signaled the move publicly on X in late February, and President Trump followed with a Truth Social post directing agencies to immediately cease use of the technology, with a six-month phase-out period for certain systems.
Anthropic fired back with lawsuits filed in March, arguing the designation was retaliatory, unconstitutional, arbitrary and capricious, and procedurally flawed. The company contends the action punishes it for refusing to grant the Pentagon unrestricted access to Claude for “all lawful purposes.”
Anthropic had sought two narrow carve-outs during contract talks: prohibitions on use in fully autonomous lethal weapons and on mass domestic surveillance of American citizens. Negotiations broke down after Anthropic held firm on these ethical red lines, despite having signed a $200 million contract with the Pentagon in July 2025 and previously deploying models on classified networks.
The appeals court acknowledged that Anthropic “will likely suffer some degree of irreparable harm” without a stay but characterized the harm as “primarily financial in nature.” It rejected the company’s First Amendment claims, noting that Anthropic failed to show its speech had been chilled during the litigation. Still, the court ordered a “substantial expedition” of the case given the stakes.
An Anthropic spokesperson said in a statement that the company is grateful the court recognized these issues need to be resolved quickly, and expressed confidence that “the courts will ultimately agree that these supply chain designations were unlawful.”
“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” the statement said.
Acting U.S. Attorney General Todd Blanche hailed the ruling on X as “a resounding victory for military readiness.” He wrote: “Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.”
The dispute highlights deepening tensions between frontier AI labs and the national security establishment over control, safety guardrails, and the military applications of rapidly advancing technology. Anthropic, led by CEO Dario Amodei, has positioned itself as a leader in responsible AI development, emphasizing constitutional AI principles and caution around high-risk uses.
Critics inside the administration viewed the company’s stance as overly restrictive or even arrogant, especially amid ongoing military conflicts where AI tools could provide strategic advantages.
The case also raises broader questions about government leverage over private AI developers. The supply chain risk tool, designed to protect against vulnerabilities in critical supply chains, had rarely—if ever—been wielded against a major domestic technology firm before Anthropic.
Legal experts and nearly 150 former judges filed amicus briefs supporting Anthropic, arguing the designation bypassed required procedures and risked chilling innovation.
For now, the split rulings mean Anthropic faces real but contained restrictions: lost DOD revenue opportunities and potential reputational damage in defense circles, while retaining access to much of the federal government and the vast commercial market. The company has warned internally that prolonged blacklisting could cost billions in 2026 revenue.
The litigation is expected to move quickly. With the D.C. appeals court calling for expedition and the underlying merits still to be decided, the standoff could shape how other AI companies negotiate with the Pentagon.