OpenAI CEO Sam Altman acknowledged Monday that the company “shouldn’t have rushed” its recent agreement with the U.S. Department of Defense, announcing revisions to the contract that incorporate stronger safeguards on surveillance and lethal autonomy — language closely mirroring Anthropic’s red lines that led to its standoff with the Pentagon.
In an internal memo he reshared on X, Altman outlined amendments clarifying OpenAI’s principles, including explicit prohibitions on using its AI systems for “domestic surveillance of U.S. persons and nationals.”
The updated language states: “The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” with the Defense Department affirming that this “prohibits deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
The revisions also confirm that OpenAI’s tools will not be used by intelligence agencies such as the NSA. Altman emphasized technical safeguards will be built to ensure model behavior aligns with these commitments, adding: “There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety.”
The original deal, announced Friday, February 27, 2026, came hours after Defense Secretary Pete Hegseth directed federal agencies to phase out Anthropic’s Claude tools over six months, threatening “major civil and criminal consequences” if the company did not assist. Anthropic had refused Pentagon demands for unrestricted military use, particularly citing concerns over mass domestic surveillance and fully autonomous weapons lacking human oversight.
Altman admitted the Friday announcement timing “looked opportunistic and sloppy,” explaining internally that the company aimed to “de-escalate things and avoid a much worse outcome” amid escalating pressure on AI firms.
He reiterated in the memo: “In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we’ve agreed to.”
The rapid sequence of events — Anthropic’s refusal, OpenAI’s deal announcement, and the Pentagon’s directive against Anthropic — triggered significant consumer backlash. Claude overtook ChatGPT as the top free app on Apple’s U.S. App Store late Friday, with many users citing ethical concerns as their reason for switching.
Anthropic’s Earlier Stance and Pentagon Pressure
Anthropic had been the first major AI lab to deploy models across the Pentagon’s classified network under a prior agreement. Months of talks broke down after the company sought guarantees against use for domestic mass surveillance or autonomous weapons without human control. CEO Dario Amodei publicly stated Thursday that Anthropic “cannot in good conscience accede” to demands lacking these protections.
The Pentagon, through spokesman Sean Parnell, insisted it seeks only “lawful purposes” and has no interest in illegal mass surveillance of Americans or fully autonomous lethal systems. However, Defense Undersecretary Emil Michael accused Amodei of having a “God-complex” and risking national security.
Industry and Political Reactions
The situation has exposed deep tensions between frontier AI labs and national security imperatives. Sen. Thom Tillis (R-NC) criticized the Pentagon’s public handling as unprofessional, while Sen. Mark Warner (D-VA) called it evidence of disregard for AI governance, urging Congress to enact binding rules for national security contexts.
OpenAI’s revisions appear designed to defuse backlash while preserving the Pentagon partnership. Altman’s acknowledgment of rushing the deal and advocacy for Anthropic suggest an attempt to reposition OpenAI as collaborative rather than opportunistic.
The saga highlights the challenge AI labs face in balancing commercial and government partnerships with ethical red lines. Many see Anthropic’s refusal of the Pentagon’s demands, and the resulting consumer surge, as evidence that principled stands can translate into market advantage among a privacy-conscious user base.
It also means the U.S. government will not easily secure unrestricted access to frontier AI models, which it has touted for military advancement. The Pentagon’s threats of contract cancellation and supply-chain risk designations have raised alarms about government overreach into private-sector ethics.
How the saga pans out is expected to influence industry norms around military use of AI.