Requiring Artists’ Consent Before Training AI Systems On Their Work Would “Basically Kill” The Industry In The UK – Nick Clegg

Nick Clegg, former UK deputy prime minister and now a top executive at Meta, has intensified the growing debate over intellectual property rights in artificial intelligence development by declaring that requiring artists’ consent before training AI systems on their work would “basically kill” the industry in the UK.

Clegg made the statement while promoting his new book at the Charleston Festival, where he acknowledged that the creative community should have the right to opt out of AI training datasets—but rejected calls for a consent-first approach.

“Quite a lot of voices say, ‘You can only train on my content [if you] first ask.’ And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data,” Clegg said. “I just don’t know how you go around asking everyone first. I just don’t see how that would work. And by the way, if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”

His comments come as UK lawmakers debate legislation that would compel technology firms to disclose which copyrighted works they have used to train their AI models—a push driven by concerns from artists, musicians, writers, and other creators that their intellectual property is being exploited without permission or compensation.

Parliament Pushes Back on Consent Amendment

The proposed amendment to the Data (Use and Access) Bill, introduced by Baroness Beeban Kidron, herself a filmmaker, would force AI companies to reveal which copyrighted works they use, thereby allowing creators to protect their work under existing laws. But despite vocal support from high-profile artists such as Paul McCartney, Elton John, Dua Lipa, and Ian McKellen, the amendment was rejected last week by the House of Commons.

UK Secretary of State for Science, Innovation and Technology Peter Kyle defended the rejection, saying the country must avoid a regulatory environment that pits AI development against creative industries.

“Britain’s economy needs both sectors to succeed and to prosper,” Kyle said, warning that overregulation could stifle AI growth in the UK.

However, creative sector leaders argue the core issue is fairness, not technological innovation. Without transparency and enforceable rights, they warn, AI developers are essentially stealing their work. Kidron wrote in an op-ed that the aim of the amendment was to give artists “visibility into what happens to their work” and ensure that AI models “do not train in secret, behind closed doors, on content that has been created over years, even decades.”

“The fight isn’t over yet,” Kidron added, confirming that the bill will return to the House of Lords in early June, where efforts to revive the transparency clause are already underway.

Escalating Global Dispute Over AI and Intellectual Property

Clegg’s remarks, delivered on stage before an audience of creators, have drawn backlash not only from lawmakers but from the broader creative community, which views his dismissal of consent-based protections as a reflection of the tech industry’s longstanding reluctance to address how generative AI systems exploit copyrighted material.

This controversy is far from unique to the UK. Globally, AI developers face mounting lawsuits and public scrutiny over how training data is sourced. Authors in the U.S., artists in Europe, and news organizations in Asia have all raised concerns that AI models trained on copyrighted works are being commercialized without compensation or even acknowledgment.

In this context, Clegg’s assertion that the UK would be at a disadvantage by enforcing stronger protections is being seen by many creatives as a defense of an exploitative system. For artists, the concern is not just about the principle of consent, but about power—tech companies, they argue, are profiting from the creative labor of others while shielding the workings of their AI models from scrutiny.

A Fight Far From Over

As the AI industry continues to evolve at a blistering pace, the collision between innovation and rights is no longer hypothetical—it is already here. The UK, like many other countries, is being forced to choose between a tech-first approach and one that centers on transparency and fairness.

Clegg’s assertion may have been intended as a pragmatic warning, but to many in the creative sector, it’s the latest reminder of what they see as a pattern of technology companies claiming they cannot survive if held accountable for the content they use.

The return of the Data Bill to the House of Lords in June will be a pivotal moment. The outcome could determine whether the UK positions itself as a leader in ethical AI development or just another jurisdiction where creators’ rights are eroded in the name of innovation.
