Anthropic has sued the US government, specifically the Trump administration, the Department of Defense (Pentagon), and related agencies, after being designated a “supply-chain risk to national security.”
The company filed two federal lawsuits: one in the US District Court for the Northern District of California and another in the US Court of Appeals for the DC Circuit.
The lawsuits challenge the Pentagon’s decision, made after escalating disputes, to label Anthropic a supply-chain risk under statutes such as the Federal Acquisition Supply Chain Security Act (FASCSA) and related authorities. This designation, typically applied to foreign adversaries or high-risk entities, effectively bans or severely restricts federal agencies, military contractors, and defense-related partners from using Anthropic’s AI technology.
The conflict stems from months of negotiations over how the US military could use Anthropic’s AI. Anthropic insisted on maintaining “red lines” in its acceptable use policy, prohibiting Claude from being used for mass domestic surveillance of Americans or for fully autonomous weapons, meaning systems that select and engage targets without human intervention. The Pentagon (under Defense Secretary Pete Hegseth) and the Trump administration demanded broader access for “all lawful uses,” including unrestricted military applications.
When Anthropic refused to remove these safeguards, the administration escalated: President Trump directed all federal agencies to immediately cease using Anthropic’s technology, and the Pentagon imposed the supply-chain risk designation, allowing a transition period but cutting off defense-related business.
Anthropic describes this as an “unlawful campaign of retaliation” that violates its First Amendment rights, exceeds statutory authority, and circumvents proper contract processes. The company argues it is being punished for its views on responsible AI development, which could set a dangerous precedent for other tech firms.
This is reportedly unprecedented for a US-based company; such designations are rare and usually target foreign entities. The move threatens Anthropic’s government contracts and revenue, including specialized “Claude Gov” versions used on classified networks, as well as its broader business reputation.
The case could influence future AI-military relations, government procurement rules, and debates over AI safety guardrails vs. national security needs. Anthropic’s CEO Dario Amodei had previously signaled intent to challenge any such designation in court.
This appears to be an ongoing, high-stakes legal battle with significant ramifications for the AI industry and US defense tech policy. Anthropic’s “red lines” refer to the two firm, non-negotiable restrictions the company has placed on the use of its AI models, particularly Claude, in high-stakes contexts such as military or government applications.
These red lines became central to the company’s high-profile dispute with the US Department of Defense in early 2026, leading to contract breakdowns, federal bans on Anthropic’s technology, a supply-chain risk designation, and ultimately Anthropic’s lawsuits against the government filed on March 9, 2026.
The Two Core Red Lines
No mass domestic surveillance of Americans
Anthropic prohibits using Claude for large-scale, AI-powered monitoring or analysis of US citizens’ data without appropriate safeguards. This includes scenarios where AI could aggregate and process vast amounts of commercial or public data (geolocation, web browsing, communications, associations) to create comprehensive profiles at scale.
The company views this as incompatible with democratic values and fundamental rights. CEO Dario Amodei has argued that current laws lag behind AI’s capabilities, allowing potential exploitation of loopholes in bulk data collection without warrants. Anthropic supports lawful foreign intelligence or counterintelligence but draws a hard line at domestic mass surveillance, which it sees as a serious risk to civil liberties.
No fully autonomous weapons or lethal autonomous systems without human oversight
Anthropic refuses to allow Claude to power or control weapons systems that can select, target, and engage (including kill) without meaningful human intervention, insisting on a “human in the loop” for lethal decisions.
Anthropic argues that today’s frontier AI models are not reliable or safe enough for such high-risk, life-or-death applications, and that allowing this could endanger warfighters and civilians and set dangerous precedents. Amodei has emphasized that crossing this line would contradict American values and responsible AI development.
These restrictions are embedded in Anthropic’s Acceptable Use Policy (AUP) and contractual terms, which prohibit harmful or catastrophic misuse, including certain weapons development, surveillance without consent, and other high-risk applications. The company has publicly stated it will not knowingly provide technology that puts people at undue risk.
The Pentagon sought broader access to Claude for “all lawful purposes,” including potential military and intelligence uses, without these explicit exceptions. Anthropic insisted on carving out the red lines in any contract. When negotiations failed, the administration, through President Trump and Defense Secretary Pete Hegseth, ordered federal agencies to cease using Anthropic technology.
The Pentagon labeled Anthropic a “supply-chain risk to national security,” a rare step usually reserved for foreign adversaries.
Rivals like OpenAI negotiated deals with similar-sounding red lines but accepted “all lawful use” language with added technical safeguards.
Anthropic framed this as unlawful retaliation against its protected speech on AI ethics and safety, leading to the March 2026 lawsuits challenging the designation and bans. In short, Anthropic’s red lines represent a deliberate stance on responsible AI deployment: supporting national defense and lawful uses while refusing applications that could enable mass privacy erosion or unchecked lethal autonomy.
This stance has made the company a vocal advocate for AI safety boundaries, even at significant business cost.