Google and OpenAI Researchers Back Anthropic in U.S. Court Fight Over Pentagon Blacklist

A group of engineers and researchers from two of the world’s most influential artificial intelligence companies has stepped into the legal battle between Anthropic and the U.S. government, filing a court brief that supports the AI firm’s challenge to a national security designation imposed by the Pentagon.

The filing, submitted as an amicus curiae brief — commonly known as a “friend of the court” submission — backs Anthropic’s effort to overturn a government decision branding the company a “supply chain risk to national security” and barring major firms from working with it.

Anthropic on Monday launched two lawsuits contesting the authority of the U.S. Department of Defense to impose the designation, arguing the move could devastate its business and damage its standing across the fast-growing artificial intelligence industry.


The amicus brief carries the signatures of 37 professionals identified as engineers, researchers, and scientists at Google and OpenAI — a rare show of support from individuals tied to companies that are themselves rivals of Anthropic in the AI race.

Among the most prominent signatories is Jeff Dean, Google’s chief scientist and one of the most influential engineers in the modern AI ecosystem.

In high-stakes litigation involving technology and national security, courts often receive multiple amicus briefs from outside groups seeking to influence the legal debate. But the intervention of researchers connected to rival companies adds unusual weight to the filing, underscoring broader industry concerns about the implications of the government’s decision.

The brief advances three core arguments.

First, the signatories defend Anthropic’s stance on what the company has described as its “red lines” for artificial intelligence development — particularly its refusal to support technologies enabling mass surveillance or fully autonomous lethal weapons systems.

According to the filing, Anthropic was justified in maintaining those restrictions, even if they conflicted with certain government expectations regarding defense-related AI applications.

The second and third arguments focus on the broader implications for the technology sector.

The amici contend that the government’s move to label the company a supply chain risk represents an “improper and arbitrary use of power,” warning that the precedent could affect the entire AI industry if left unchecked.

They argue that punishing companies for drawing ethical boundaries around how their technology can be used could discourage responsible research and development across the sector.

Beyond Dean, the brief includes signatures from several other engineers and researchers linked to Google and OpenAI, including Grant Birkinbine, a security engineer at OpenAI; Sanjeev Dhanda, a software engineer at Google; Leo Gao, a technical staff member at OpenAI; Zach Parent, a forward-deployed engineer at OpenAI; Kathy Korevec, director of product at Google Labs; and Ian McKenzie, a research engineer at Google.

Their participation signals growing unease among AI professionals about how governments may seek to control the deployment of advanced machine-learning systems, particularly in defense and surveillance contexts.

The legal fight has unfolded against a backdrop of intensifying competition between major AI developers and rising government interest in harnessing artificial intelligence for national security purposes.

The Pentagon has increasingly sought partnerships with technology companies to develop AI tools for intelligence analysis, logistics, cybersecurity, and battlefield decision-making. At the same time, several firms have attempted to establish ethical boundaries governing the use of their models.

Anthropic’s dispute with the Pentagon appears to have emerged partly from those boundaries. The company has drawn attention for its policy restrictions aimed at preventing its AI models from being used in certain military or surveillance applications.

Those policies reportedly became a source of friction with defense officials and ultimately contributed to the controversial designation now being challenged in court.

Executives at Anthropic have warned that the blacklist could erase billions of dollars from projected revenue and disrupt relationships with corporate and government customers alike.

The controversy has also prompted public criticism from leaders of rival AI companies.

Sam Altman, chief executive of OpenAI, said shortly after the government decision became public that he believed the move was misguided.

“To say it very clearly: I think this is a very bad decision from the DoW and I hope they reverse it,” Altman wrote on the social platform X in late February. “If we take heat for strongly criticizing it, so be it.”

At the same time, Altman acknowledged that his company’s own expanding relationship with the Pentagon — including a defense-related agreement announced around the same time the dispute with Anthropic escalated — “looked opportunistic and sloppy.”

The outcome of the case could carry far-reaching implications for the rapidly evolving artificial intelligence industry.

If the government’s designation is upheld, technology companies may face new pressure to align their AI policies with national security priorities in order to keep access to government contracts and avoid regulatory scrutiny.

If Anthropic prevails, the decision could reinforce the ability of AI firms to impose their own ethical restrictions on how powerful machine-learning systems are deployed — a debate that sits at the center of the global race to shape the future of artificial intelligence.
