A federal judge in California suggested on Tuesday that the Pentagon’s unprecedented blacklisting of artificial intelligence firm Anthropic may have been motivated by the company’s public stance on AI safety rather than genuine national security concerns.
The case stems from Anthropic’s lawsuit challenging the Department of Defense’s designation of the company as a “national security supply-chain risk,” a label that effectively blocks it from certain military contracts. The designation followed Anthropic’s refusal, on reliability and ethical grounds, to let its Claude AI software be used for domestic surveillance or autonomous weapons, and the company contends the move violates its constitutional rights, including free speech and due process.
U.S. District Judge Rita Lin, a Biden appointee, said during the hearing in San Francisco that the designation “looks like an attempt to cripple Anthropic” and suggested it may be punitive.
“It looks like DOW is punishing Anthropic for trying to bring public scrutiny to this contract dispute,” she said, referring to the Department of War, President Donald Trump’s rebranding of the Defense Department.
The Anthropic lawsuit, filed on March 9, argues that the Pentagon overstepped its authority by imposing the supply-chain risk label without giving the company an opportunity to respond, in violation of the Fifth Amendment. Anthropic also claims that the move constitutes retaliation for speaking out on AI safety, implicating First Amendment protections.
During the hearing, the company’s attorney, Michael Mongan, described the designation as a distorted use of federal procurement law.
“The logical implication of their position here is they can point to their frustrations in a contract negotiation, the stubbornness of the vendor, and say, ‘because you’re working in an area that touches national security, we’re going to tell the world that we think you might come around in the future and sabotage our systems,’” Mongan said.
The Department of Justice, defending the Pentagon, argued that the designation was a precautionary measure. DOJ attorney Eric Hamilton said the company’s reluctance to allow military use of Claude created an unacceptable operational risk.
“What happens if Anthropic, through an update, installs a kill switch or installs functionality that allows it to change how the software is functioning when our warfighters need it most? That is an unacceptable risk,” he said.
Anthropic has warned that the designation could cost the company billions of dollars in lost contracts and reputational damage. The label is notable as the first public use of this obscure procurement statute against a U.S.-based company. A separate lawsuit in Washington, D.C., challenges another Pentagon supply-chain designation that could exclude Anthropic from civilian government contracts.
Judge Lin said she would issue a written ruling in the coming days on the request to temporarily block the designation. The case is one of several involving the Pentagon’s supply-chain designations, and it highlights growing tension between AI developers seeking to assert ethical boundaries and the military’s insistence on secure, controllable systems, with potential implications for AI companies nationwide.
Anthropic executives maintain that AI models remain insufficiently reliable for deployment in weapons systems and domestic surveillance. Their stance has sparked a wider debate over the ethical and strategic use of AI in national defense, and whether government agencies can wield procurement law to enforce compliance.
With little established AI regulation in place, the case is being closely watched, as it is expected to set a precedent for how AI firms interact with the U.S. military.



