Google Quietly Drops Pledge Against AI for Weapons and Surveillance, Raising Concerns

Google has quietly removed a key commitment from its AI principles, deleting a pledge not to develop artificial intelligence for weapons or surveillance.

The change, first noticed by Bloomberg, reflects a shift in the tech giant’s stance on military and security applications of AI, one that could have profound implications for global AI ethics.

The updated version of Google’s AI policy, published Tuesday, now emphasizes that the company pursues AI “responsibly” and in accordance with “widely accepted principles of international law and human rights.” However, it no longer explicitly states that Google will avoid developing AI for weapons or mass surveillance.


The revised policy was introduced in a blog post by Demis Hassabis, the head of Google DeepMind, and James Manyika, senior vice president of research labs. They framed the update as part of Google’s belief that “democracies should lead in AI development” and that AI should be built in collaboration with governments and organizations that uphold values such as “freedom, equality, and respect for human rights.”

A Shift in Google’s AI Ethics?

For years, Google had committed to avoiding AI applications that could cause harm. The previous version of its AI Principles included a section titled “applications we will not pursue,” which explicitly ruled out AI for weapons and surveillance that violated “internationally accepted norms.”

That commitment has now disappeared. While Google has not directly explained the removal, the change aligns with the company’s growing involvement in military and security-related AI projects.

Google has faced internal protests from employees over its contracts with the U.S. Department of Defense and the Israeli military, particularly in the areas of cloud computing and AI. The company has consistently maintained that its technology is not used to harm humans—but recent revelations challenge that claim.

The Pentagon’s AI chief recently told TechCrunch that some companies’ AI models, including Google’s, are helping accelerate the U.S. military’s “kill chain”—the process by which targets are identified and engaged in combat operations.

The removal of the anti-weapons and anti-surveillance pledge is already sparking backlash from digital rights groups, AI ethics researchers, and some Google employees.

Google was one of the few major AI companies that had made a clear commitment not to develop AI for warfare. Some believe that walking back that commitment suggests a prioritization of profit and power over ethical responsibility.

Others argue that Google’s new AI policy is vague, replacing concrete commitments with broad, subjective language about “international law and human rights”—a standard that is open to interpretation and could allow the company to justify nearly any AI application.

Growing Involvement in National Security AI

Google’s softened AI stance may reflect growing pressure from Washington to ensure that leading U.S. tech companies remain competitive in the global AI race—especially against China.

The U.S. government has been increasingly focused on integrating AI into military strategy, and tech firms like Google, Microsoft, and Amazon have been expanding their roles in national security.

What This Means for AI’s Future

Google’s decision to quietly remove its pledge raises critical questions about the future of AI ethics.

  • Will other tech giants follow suit and relax their AI ethics commitments?
  • How will governments regulate AI applications for military and surveillance use?
  • Will employees within AI companies push back against these shifts, as they did during Google’s earlier involvement in Project Maven, a Pentagon AI program?

For now, Google’s updated AI principles suggest a growing willingness to engage in AI projects that serve national security interests—even if it means abandoning previous ethical commitments.
