Google has confirmed it will sign the European Union’s general-purpose AI code of practice, a voluntary framework meant to help developers of powerful AI models align with the bloc’s upcoming AI Act.
The move sets Google apart from Meta, which earlier this month refused to endorse the code, calling it overreaching and harmful to Europe’s AI prospects.
The decision by Google comes just days before August 2, when new EU rules for providers of “general-purpose AI models with systemic risk” are scheduled to take effect. These rules apply to major players like Google, Meta, OpenAI, Anthropic, and others building or deploying large-scale generative models. While the AI Act gives these companies two years to fully comply, the EU code of practice acts as a transitional mechanism to encourage best practices ahead of enforcement.
In a blog post published Wednesday, Kent Walker, Google’s President of Global Affairs, acknowledged that the final version of the code was an improvement from the original draft, but said the company still holds “serious reservations.” He warned that the AI Act and its accompanying code could hinder innovation, citing concerns over deviations from EU copyright law, slowed approval timelines, and exposure of proprietary trade secrets.
“We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI,” Walker wrote.
Despite those reservations, Google is moving forward with signing the code, making it one of the first of the world’s top AI firms to publicly declare support for the EU framework. Signing the code commits AI developers to a list of expectations, including keeping documentation on their AI models up to date, avoiding the use of pirated content in training datasets, and honoring requests from content owners who do not wish to have their work used to train AI.
Meta’s refusal to sign has drawn a sharp line within the AI industry. The company described the code as legally questionable and accused the EU of creating obligations that go beyond the AI Act’s legal framework. Meta also criticized what it called Europe’s “wrong path on AI,” arguing that such regulations may discourage companies from building foundational AI systems in the region. This sentiment reflects broader tensions between U.S. tech giants and European regulators, especially as the EU takes the lead globally in attempting to place guardrails on AI.
The EU’s AI Act itself is a sweeping risk-based regulation. It bans certain “unacceptable risk” uses of AI, including manipulative behavioral systems and social scoring, while placing strict controls on “high-risk” applications like facial recognition, biometrics, education, and employment. Developers of such systems will be required to register their models, conduct risk assessments, and meet transparency and quality management obligations. Violators face stiff penalties, including fines of up to 7% of global turnover.
While the code of practice is not legally binding, it offers a glimpse of how the EU plans to interpret and enforce the broader rules of the AI Act. Companies that sign the code are expected to benefit from greater legal clarity and reduced regulatory friction, particularly during the transition period before the full force of the law kicks in.
Google’s decision to align with the EU—despite ongoing misgivings—signals a cautious but strategic embrace of regulatory cooperation. Meta’s defiance, on the other hand, highlights deep industry fractures over how best to balance innovation with accountability in the fast-evolving AI space.
It is not yet clear whether other U.S. firms like Microsoft, OpenAI, or Anthropic will follow Google’s lead or side with Meta. What is clear is that the EU’s push to rein in AI through comprehensive rules is no longer theoretical. The regulatory future is arriving, and companies must now choose their path.