Google to Sign EU’s AI Code Despite Concerns Over Innovation and Copyright Risks
Quote from Alex Bobby on July 30, 2025, 10:22 AM
In a significant step for the future of artificial intelligence regulation in Europe, Google has confirmed it will sign the EU’s AI Code of Practice on General Purpose AI (GPAI) — even as it voices serious concerns about the potential chilling effects on innovation and development. The move sets Google apart from some of its American tech peers, most notably Meta, which has publicly rejected the Code.
The voluntary AI Code, released by the European Commission in early July, is designed to prepare companies for full compliance with the EU AI Act, the bloc’s sweeping legislation to regulate the development and deployment of artificial intelligence. Although not legally binding, the Code offers tech firms the advantage of greater regulatory clarity and predictability, while those who opt out could face increased inspections and legal scrutiny.
A Calculated Commitment
Google’s decision, announced in a statement from Kent Walker, President of Global Affairs at Alphabet (Google’s parent company), reflects a pragmatic approach. The company is clearly keen to maintain influence in Europe’s AI policy environment and signal cooperation with regulators — even as it raises red flags over elements of the rules it finds problematic.
“While the final version of the Code comes closer to supporting Europe’s innovation and economic goals, we remain concerned that the AI Act and Code risk slowing down Europe’s development and deployment of AI,” said Walker in a blog post.
He specifically called out provisions that could depart from EU copyright law, slow down product approvals, or force companies to expose trade secrets, warning that these elements could “chill European model development” and harm competitiveness in a fast-moving global AI race.
The Code and the AI Act: What’s at Stake?
The AI Code of Practice is part of a larger regulatory framework aimed at making the EU a global leader in ethical and safe AI deployment. It sets out voluntary guidelines for transparency, copyright obligations, data governance, model evaluations, and cybersecurity.
Although optional, the Code is considered a preparatory tool for companies to align with the AI Act, which officially takes effect for GPAI systems on 2 August 2025. Companies that already have AI tools on the market will have two years to implement the new rules, while those launching tools after that date must be compliant immediately.
For companies like Google, this means making early adjustments in areas such as model transparency, intellectual property rights, and user safeguards. Signing the Code signals a commitment to responsible development — and also helps reduce the regulatory burden by promising fewer inspections and better legal certainty.
Meta’s Pushback and Copyright Concerns
Not every tech giant is on board. Meta, the company behind Facebook and Instagram, has refused to sign the Code, claiming that the current framework would stifle innovation and put unfair pressure on companies developing foundation AI models.
Meta’s criticism aligns with growing concerns from rightsholders and creative industry groups, who worry that parts of the AI Code — especially those tied to data use and training — may violate copyright protections or shift too much responsibility onto model developers. These groups have criticised the Commission’s drafting process, claiming it overlooked crucial copyright safeguards and failed to properly address licensing challenges for training data.
While Google has echoed some of these worries, particularly regarding potential overreach into proprietary information, it appears more willing to work within the system to shape outcomes from the inside.
Industry Divides and Regulatory Tensions
The divergence between Google and Meta reflects broader tensions within the tech industry over how best to manage AI development. On one side are companies prioritising collaboration with regulators, even at the cost of short-term flexibility. On the other are those that fear overregulation could hinder technological progress and innovation in an increasingly competitive field.
For European regulators, the stakes are high. They are walking a tightrope between creating a trusted, secure AI ecosystem and ensuring that Europe remains an attractive hub for tech investment. Striking that balance is difficult when even cooperative firms like Google are warning that parts of the AI Code and AI Act could have unintended consequences.
The Road Ahead
The European Commission will publish the list of companies that have signed the Code on 1 August, just one day before the AI Act’s GPAI provisions come into force. Google’s name will be among them — a signal that, despite reservations, the company is choosing to engage constructively.
In its statement, Google emphasised its commitment to work closely with the new EU AI Office, the body tasked with overseeing enforcement of the AI Act. The company said it aims to ensure that the Code is “proportionate and responsive to the rapid and dynamic evolution of AI.”
This ongoing collaboration could help shape more balanced policies in the future — but only if regulators are willing to adjust and adapt based on feedback from both supporters and critics of the Code.
Conclusion
Google’s decision to sign the EU’s AI Code of Practice represents a strategic move to stay engaged with European regulators, despite lingering concerns over copyright, innovation, and data protection. As the EU positions itself as a global standard-setter in AI governance, the choices made by major tech firms like Google and Meta will shape the pace and direction of artificial intelligence development on the continent.
While the Code is voluntary, it sets the tone for how the AI Act will be implemented — and the stakes for compliance are high. Whether this regulatory approach will foster innovation or create friction remains to be seen, but one thing is clear: the AI policy debate in Europe is just beginning.
Meta Description: Google will sign the EU’s AI Code of Practice for General Purpose AI, offering support despite concerns over regulatory impact on innovation, copyright, and trade secrets.
