Denmark Grants Copyright Over Personal Likeness to Combat Deepfakes — How Europe Is Responding
Quote from Alex bobby on June 30, 2025, 12:06 PM
Denmark Takes Aim at Deepfakes with New Copyright Law — What Other Laws Exist Across Europe?
In a bold move to tackle the growing threat of deepfakes, Denmark has announced a landmark law that grants individuals copyright protection over their own likeness—including their voice, face, and other personal characteristics. This proactive measure aims to curb the misuse of artificial intelligence (AI) in creating misleading or harmful digital content.
The law, backed by all major political parties in Denmark, marks one of the most comprehensive national responses to the deepfake phenomenon in Europe. As the technology behind deepfakes becomes increasingly advanced and accessible, other European countries are also racing to introduce legal protections. But how far have they gone?
Denmark’s Groundbreaking Law: Ownership of Your Own Image
According to Jakob Engel-Schmidt, Denmark’s Minister of Culture, the new bill sends an “unequivocal signal” that people have the right to their own body, voice, and facial features. Under the law, it will be illegal to create or share deepfake videos or “digital imitations” of someone without their consent.
Deepfakes—AI-generated videos or images that manipulate someone's appearance or voice—have often been used to spread disinformation, bully public figures, or create fake pornographic material. As Engel-Schmidt warns, technology is evolving so rapidly that it’s becoming increasingly difficult to distinguish fiction from reality. The Danish law serves both as a safeguard against misinformation and a challenge to tech platforms to step up accountability.
European Union: Transparency and Accountability Under the AI Act
At the EU level, deepfakes are being regulated under the recently adopted AI Act, which classifies artificial intelligence systems by risk level. Deepfakes are generally considered “limited risk” systems, meaning they’re not banned, but are subject to transparency obligations.
Under these rules:
- AI-generated content must be clearly labeled, often with watermarks.
- Companies must disclose summaries of the data used to train their models and document how their AI systems work.
- Fines for non-compliance can reach €15 million or 3% of a company’s global annual turnover—up to €35 million or 7% for prohibited practices.
In addition, a separate EU directive on violence against women criminalises the non-consensual production or manipulation of sexual content using AI, including deepfake pornography. However, the directive leaves penalties and enforcement to individual EU member states, which have until June 2027 to transpose it into national law.
France: Strict Criminal Penalties and Platform Responsibility
France has taken a firm stance by amending its criminal code in 2024 to directly address AI-manipulated media. Under the updated law:
- Sharing deepfakes without consent is illegal, and all AI-generated media must be clearly labeled.
- Offenders face up to one year in prison and a €15,000 fine—rising to two years and €45,000 if the material is shared through an online platform.
- Pornographic deepfakes are banned outright, even when clearly marked as fake. Distributors can face up to three years in prison and €75,000 in fines.
The law also grants the French audiovisual regulator, Arcom, the authority to demand removal of illicit content and force platforms to improve moderation tools.
United Kingdom: Piecemeal but Progressing
The UK has passed several laws that address deepfake pornography, though critics argue they lack comprehensive coverage.
Key legislative actions include:
- Amendments to the Data (Use and Access) Bill, targeting those who create fake sexual images to cause harm. Violators may face an unlimited fine.
- A two-year prison sentence under the Sexual Offences Act for creating sexual deepfakes.
- The Online Safety Act, which obligates platforms to proactively detect and remove non-consensual sexual content. Companies that fail to comply may be fined up to 10% of their global revenue.
However, the creation of deepfakes, even sexual ones, is not fully outlawed unless they are shared or intended to be shared. Legal experts, such as Professor Julia Hörnle of Queen Mary University of London, warn that the UK’s approach leaves victims vulnerable, particularly as the tools to create deepfakes remain widely available.
What’s Next for Europe?
Denmark’s move could set a new benchmark for digital rights in the AI era, pushing other countries to treat likeness and voice as personal intellectual property. While the EU’s AI Act provides a broad regulatory framework, it relies heavily on transparency and doesn’t go as far as criminalising deepfake creation in all contexts.
France and the UK have taken stronger steps on deepfake pornography, but inconsistencies remain in how countries define and penalise such content.
In the end, the legal battle against deepfakes is still in its early stages. As AI-generated content becomes more realistic and easier to produce, experts say that a unified, enforceable European legal standard may be the only way to protect citizens from manipulation, defamation, and exploitation in the digital age.
Conclusion: A Growing Legal Front Against Deepfakes in Europe
Denmark’s new law granting individuals copyright over their own likeness marks a significant step in protecting citizens from the growing threat of deepfakes. It sets a powerful precedent for treating one’s voice, face, and identity as intellectual property in the digital age. While the European Union, France, and the UK have all taken steps to regulate or criminalise harmful AI-generated content—especially in cases involving sexual exploitation—gaps and inconsistencies still remain across jurisdictions.
As deepfake technology continues to evolve, so too must legal frameworks. What’s clear is that protecting people from digital impersonation is no longer just a privacy issue—it’s about preserving trust, safety, and dignity in an increasingly AI-driven world. Stronger, more unified legal protections across Europe will be essential to combat misuse, hold platforms accountable, and ensure that innovation does not come at the cost of individual rights.
Meta Description (SEO):
Denmark will give people copyright over their own likeness to fight deepfakes. Explore how France, the UK, and the EU are tackling AI-generated media through new laws.
