A growing number of current and former employees at OpenAI and Google have signed a joint petition opposing the unrestricted deployment of their companies’ AI technologies for mass surveillance or fully autonomous weapons that can kill without human oversight.
Titled “We Will Not Be Divided,” the online petition — launched in early February 2026 — invites verified employees from both firms to publicly declare their stance, with the option to remain anonymous.
As of Friday, more than 220 individuals had signed: 176 from Google and 47 from OpenAI. Google employs approximately 187,000 people globally (mid-2025 figures), while OpenAI’s headcount runs into the thousands. The petition’s relatively modest numbers belie its significance as a rare public act of dissent from within two of the world’s most influential AI labs.
The petition explicitly references pressure from the Department of Defense (referred to as the “Department of War” in the text) to provide military access to AI models. It claims the Pentagon has threatened to invoke the Defense Production Act (DPA) to force Anthropic — another major AI developer — to tailor its technology to military needs, warning of labeling the company a “supply chain risk” if it refuses.
“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the petition states.
The document accuses the Defense Department of a “divide and conquer” strategy: pitting companies against each other by implying competitors will comply if one refuses.
“That strategy only works if none of us know where the others stand,” it reads. “This letter serves to create shared understanding and solidarity in the face of this pressure.”
The signers call on OpenAI and Google leadership to “put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”
Context: Pentagon Pressure and Anthropic’s Stance
The petition follows Axios’ Tuesday report that Defense Secretary Pete Hegseth set a deadline for Anthropic CEO Dario Amodei to grant the military sweeping access to Claude models, threatening contract cancellation or further action if refused.
A Defense official told Axios: “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”
Anthropic responded Thursday with a firm public refusal: "We cannot in good conscience accede" to demands allowing unrestricted military use, particularly for mass surveillance of Americans or autonomous weapons lacking human oversight.
Amodei noted new contract language from the Pentagon “made virtually no progress” on these red lines.
Pentagon spokesman Sean Parnell countered on social media, saying “The military has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
He insisted the department seeks only “lawful purposes” but offered no specifics.
Defense Undersecretary for Research and Engineering Emil Michael escalated the rhetoric, posting on X that Amodei “has a God-complex” and is “ok putting our nation’s safety at risk.”
Experts view the Pentagon’s approach as unprecedented. Dean Ball, former senior policy advisor in the White House Office of Science and Technology Policy and fellow at the Foundation for American Innovation, told Business Insider: “We’re absolutely in uncharted territory. What are the stakes for Anthropic? I mean, Anthropic could be quasi-nationalized, or they could be driven out of business. The stakes are huge for them.”
Ball warned the episode sends a chilling signal to the tech industry that “doing business with the government is extremely dangerous.”
Sen. Thom Tillis (R-NC) criticized the Pentagon’s handling as unprofessional.
"Why in the hell are we having this discussion in public? This is not the way you deal with a strategic vendor that has contracts," he said, urging a closed-door resolution.
Sen. Mark Warner (D-VA), ranking member of the Senate Intelligence Committee, said he was deeply disturbed.
“Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance. It further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts,” he said.
OpenAI, Google, Anthropic, and xAI all maintain contracts or discussions with the Pentagon for military AI applications. Anthropic has been the most public in resisting unrestricted use, citing policies against mass surveillance and autonomous weapons. The petition underlines a rare cross-company alliance among employees concerned about ethical boundaries.
Hegseth has pushed for faster AI deployment, describing it as a “wartime arms race” during a January visit to Elon Musk’s SpaceX. He has also criticized military legal advisors as potential “roadblocks,” leading to high-profile dismissals of top Army and Air Force lawyers in early 2026.
The clash highlights tensions between rapid military adoption of frontier AI and calls for governance, transparency, and human oversight — especially in sensitive areas like surveillance and lethal autonomy.
The petition and Anthropic’s stance are expected to embolden further employee activism across AI labs. Congressional attention — from Tillis and Warner — suggests growing bipartisan interest in formal AI governance for national security applications. For OpenAI and Google, the petition adds internal pressure at a time when both companies face scrutiny over military ties. Anthropic’s refusal has positioned it as a leader in setting red lines, potentially influencing industry norms.
The coming weeks will test whether the Pentagon softens its demands or escalates pressure through contract leverage or DPA invocation.