The administration of Donald Trump is preparing sweeping new rules governing how artificial intelligence companies can do business with the federal government, escalating a standoff with leading AI developer Anthropic and signaling a broader shift in Washington’s approach to the rapidly evolving technology.
The proposed policy, drafted by the General Services Administration (GSA), would require companies seeking U.S. government contracts to grant federal agencies an irrevocable license to use their AI systems for “any lawful purpose,” according to a report by the Financial Times.
The rules would apply to civilian government contracts but mirror measures now being considered by the U.S. Department of Defense for military-related AI deployments, highlighting how disputes over model safeguards are beginning to shape federal procurement policy.
The effort comes after the Pentagon declared Anthropic a “supply-chain risk,” effectively preventing defense contractors from using the company’s AI technology in military projects.
The conflict grew out of months of tension between Anthropic and defense officials over safety restrictions built into the company’s AI systems. Anthropic, known for emphasizing safety and alignment in its models, has argued that guardrails limiting certain uses of AI are essential to prevent misuse of powerful systems.
Defense officials, however, have pushed back, arguing that such constraints could limit the military’s ability to deploy AI tools in intelligence analysis, cybersecurity operations, and battlefield decision-making. The Pentagon’s designation of Anthropic as a supply-chain risk marked an unusually direct confrontation between a major AI developer and U.S. defense authorities.
The move also underlines growing concern within national security circles that technology providers could restrict the government’s operational flexibility by embedding policy constraints into their software.
The dispute quickly spilled into civilian government procurement.
According to Josh Gruenbaum, commissioner of the GSA’s Federal Acquisition Service, the agency has terminated Anthropic’s participation in the government’s OneGov contracting program, a centralized procurement platform that allows federal agencies across the executive, legislative, and judicial branches to access pre-negotiated technology contracts.
“It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic,” Gruenbaum said.
“As directed by the President, GSA has terminated Anthropic’s OneGov deal — ending their availability to the Executive, Legislative, and Judicial branches through GSA’s pre-negotiated contracts.”
The decision effectively shuts Anthropic out of a large segment of federal AI procurement unless the dispute is resolved.
Beyond the licensing provisions, the draft rules also impose new requirements aimed at ensuring neutrality in AI outputs used by federal agencies. Under the proposed guidelines, contractors must ensure their systems do not intentionally embed partisan or ideological judgments in the information they generate.
Companies will also be required to disclose whether their models have been modified to comply with regulatory frameworks outside the U.S. federal government — a provision likely aimed at identifying systems shaped by foreign regulations or corporate compliance standards.
Such disclosures could become increasingly important as AI developers operate globally and adapt their models to different regulatory environments.
Government becomes a dominant AI customer
The policy shift highlights the rapidly expanding role of artificial intelligence across the federal government. From intelligence gathering and cybersecurity monitoring to logistics and administrative automation, agencies are increasingly integrating AI systems into their daily operations.
The Pentagon in particular has accelerated its adoption of AI in recent years, viewing advanced machine-learning systems as essential tools in modern warfare and strategic competition. Defense planners believe that AI technologies could transform everything from battlefield surveillance to real-time decision-making in military operations.
The clash between Washington and Anthropic illustrates a deeper tension that is emerging across the technology sector: who ultimately controls the use of powerful AI systems.
Technology companies have increasingly introduced safeguards designed to prevent harmful or controversial uses of their models. But governments — particularly those focused on national security — often seek broader authority to deploy such tools in sensitive or classified contexts.
The new procurement rules suggest the U.S. government intends to assert clear authority over how AI systems can be used once they are purchased by federal agencies.
The stakes for AI companies are rising.
Government contracts represent a rapidly growing market as public institutions adopt artificial intelligence at scale. Yet the new rules make clear that firms seeking access to that market may need to relinquish some control over how their technology is used.
The dispute also underscores the growing importance of artificial intelligence in global geopolitics. As countries race to develop advanced AI capabilities, governments increasingly view the technology as a cornerstone of economic competitiveness and national security.
Washington’s push to secure broad usage rights over AI systems suggests that policymakers view unrestricted access to the technology as critical to maintaining technological leadership.