Thousands of AI-Generated Sexualised Videos of Minors Found on TikTok, Raising Child Safety Concerns
By Alex Bobby, December 14, 2025
A report by Maldita reveals thousands of AI-generated sexualised videos featuring minors on TikTok, highlighting links to child exploitation networks and raising urgent concerns about online safety, moderation, and regulation.
Artificial intelligence (AI) has transformed the way content is created and shared online, enabling a wave of creativity and innovation. However, a recent report by the Spanish fact-checking organisation Maldita has highlighted a dark and deeply troubling misuse of this technology on TikTok. According to the report, thousands of AI-generated videos featuring sexualised depictions of minors are circulating on the platform, raising urgent questions about the effectiveness of moderation, the ethical responsibilities of social media companies, and the risks posed to young users.
The Scope of the Problem
Maldita’s investigation uncovered over 5,200 AI-generated videos across more than 20 TikTok accounts, all portraying young girls in sexualised clothing or suggestive positions. The videos depict girls in school uniforms, bikinis, and other tight clothing, framed in deliberately suggestive ways. The accounts in question collectively have over 550,000 followers and have amassed nearly 6 million likes, illustrating both the reach of these videos and the demand for such content.
Perhaps most alarmingly, the comments on these videos often contain links to Telegram groups that sell child pornography. Maldita’s team identified 12 such groups and reported them to Spanish law enforcement, underscoring the direct connection between AI-generated content on mainstream platforms and illegal child exploitation networks.
Monetisation and Platform Responsibility
The financial dimension of the problem adds another layer of concern. Many of the accounts producing AI-generated content profit through TikTok’s subscription model, which allows followers to pay for access to exclusive content. Under TikTok’s system, creators receive roughly 50% of the revenue, while the platform keeps the other half. This structure incentivises creators to produce attention-grabbing and often controversial content, and in some cases illegal material, in order to maintain a profitable subscriber base.
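To make the incentive concrete, here is a minimal illustrative sketch of how a roughly 50/50 revenue split scales with a follower base. The subscription price and follower-to-subscriber conversion rate are assumptions for illustration only; they are not figures from the report.

```python
# Illustrative sketch of a roughly 50/50 subscription revenue split.
# The price and conversion rate below are assumptions, not reported figures.

MONTHLY_PRICE_USD = 4.99      # assumed subscription price
PLATFORM_SHARE = 0.50         # report states roughly 50% goes to the platform
CONVERSION_RATE = 0.01        # assumed share of followers who subscribe

def creator_monthly_revenue(followers: int) -> float:
    """Estimate a creator's monthly payout under the assumed split."""
    subscribers = followers * CONVERSION_RATE
    gross = subscribers * MONTHLY_PRICE_USD
    return gross * (1 - PLATFORM_SHARE)

# The ~20 accounts in the report collectively have over 550,000 followers.
print(f"${creator_monthly_revenue(550_000):,.2f} per month")  # ~$13,722.50
```

Even under these conservative assumed numbers, a follower base of that size translates into a meaningful recurring income, which is the incentive the report describes.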
TikTok explicitly prohibits sexualised content involving minors, yet the sheer volume of AI-generated videos that have spread across the platform demonstrates a significant gap in enforcement. AI-generated content can be particularly difficult for automated moderation systems to detect because it does not always involve real children, technically skirting some definitions of illegal content, even as it clearly promotes exploitative imagery.
Global Policy Responses
The report comes at a time when regulators worldwide are scrutinising the safety of children on social media. Countries including Australia, Denmark, and members of the European Union have implemented or are considering stricter rules for users under 16, aiming to reduce exposure to harmful or sexualised content. Some of these measures include age verification requirements, restricted functionality for younger users, and obligations for platforms to remove content deemed harmful to minors.
Maldita’s findings provide a stark reminder that regulatory oversight is lagging behind technological developments, particularly as AI enables new forms of content creation that can bypass traditional safeguards.
AI Content and the Risk of Normalisation
One of the most insidious aspects of AI-generated sexualised videos is the potential for normalisation of abuse. Even though the depicted individuals are not real, the content still sexualises the appearance of minors and can encourage abusive behaviours in viewers. Experts warn that exposure to such content, especially when it is widely shared and monetised, can contribute to a culture in which the sexualisation of children is trivialised or accepted.
Furthermore, the integration of these videos into platforms like TikTok—which is widely popular among teenagers—raises the risk of peer exposure, potentially affecting vulnerable audiences and shaping harmful perceptions of sexuality at a formative age.
Challenges of Detecting and Removing AI Content
Moderating AI-generated content presents unique challenges for social media companies. Unlike traditional videos or photographs of real individuals, AI-generated images do not involve actual children, which can create legal grey areas that complicate enforcement. Automated detection tools may struggle to differentiate between AI-generated imagery and permissible content, allowing exploitative material to persist longer than it should.
Experts call for more advanced AI moderation tools and human oversight to address these gaps. Social media companies must combine automated detection with expert review to ensure that harmful content is swiftly removed and that platforms are not inadvertently supporting illegal activities.
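As a rough illustration of the hybrid approach experts describe, the sketch below routes content by a classifier’s confidence score: near-certain violations are removed automatically, ambiguous cases are escalated to a human review queue, and the rest pass through. The thresholds, score source, and queue structure are all hypothetical; this does not use any real platform’s API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical thresholds; real systems tune these against precision/recall targets.
AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violation: remove immediately
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous (e.g. AI-generated imagery): escalate

@dataclass
class ModerationQueue:
    """Simple stand-in for a human review queue."""
    pending: List[str] = field(default_factory=list)

    def escalate(self, video_id: str) -> None:
        self.pending.append(video_id)

def triage(video_id: str, violation_score: float, queue: ModerationQueue) -> str:
    """Route a video by classifier confidence: remove, escalate, or allow."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "removed"
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        queue.escalate(video_id)   # expert reviewers make the final call
        return "pending_review"
    return "allowed"

# AI-generated content often lands in the ambiguous middle band,
# which is exactly why human oversight matters.
queue = ModerationQueue()
print(triage("vid_001", 0.97, queue))  # removed
print(triage("vid_002", 0.72, queue))  # pending_review
print(triage("vid_003", 0.10, queue))  # allowed
```

The design point is the middle band: automated removal handles clear-cut cases at scale, while genuinely ambiguous material, where AI-generated imagery tends to fall, is deferred to trained reviewers rather than silently allowed or wrongly removed.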
The Need for Coordinated Action
The TikTok report highlights the urgent need for multi-layered approaches to safeguard children online. This includes:
- Strengthening platform moderation: Investing in better AI detection tools and human moderators trained to identify sexualised content involving minors.
- Regulatory oversight: Governments must update legislation to address the unique risks of AI-generated content and hold platforms accountable for enforcement.
- Parental education: Parents and guardians need guidance on digital safety, the risks of AI-generated content, and strategies to protect children online.
- Global cooperation: Because content can cross borders instantly, international coordination is crucial for law enforcement to track and dismantle child exploitation networks.
TikTok, with its massive user base and global reach, sits at the centre of this challenge. How it responds in the coming months will set an important precedent for AI content moderation and child protection across social media.
Conclusion
Maldita’s report exposes a chilling trend: AI technology, while revolutionary, can also be exploited to produce and distribute sexualised depictions of minors at scale. The combination of social media virality, monetisation incentives, and difficult-to-detect AI content creates a high-stakes challenge for platforms, regulators, and society at large.
Ensuring children’s safety in a digital age increasingly dominated by AI will require robust policies, sophisticated moderation tools, and active public engagement. TikTok’s response to these findings will not only influence its credibility but also serve as a benchmark for how other platforms manage the risks associated with AI-generated content.
Final Thought
AI has enormous potential to transform creativity and communication—but without careful oversight, it can also facilitate harm on an unprecedented scale. Protecting children online demands vigilance, innovation, and a commitment from both platforms and users to prevent the exploitation of minors and safeguard the next generation.