Elon Musk’s AI chatbot Grok, part of his xAI ecosystem, has come under intense scrutiny after users leveraged its image-generation capabilities to create nonconsensual sexualized images of real people, including minors.
The controversy has sparked responses from governments, human rights advocates, and legal experts worldwide, underscoring the persistent challenges of regulating AI tools with powerful media manipulation capabilities.
The Misuse and Its Scope
Over the past week, reports and screenshots shared on Musk’s social media platform X have shown that users prompted Grok to digitally alter images of adults and children, “removing clothes,” repositioning bodies, or generating sexualized depictions. While some prompts involved consenting adults, such as OnlyFans creators experimenting with their own images, numerous cases involved nonconsenting subjects. Some altered images included minors, intensifying the legal and ethical stakes.
Grok’s AI moderation policies, outlined in xAI’s “Acceptable Use” guidelines, explicitly prohibit the sexualization of individuals and the depiction of minors in pornographic contexts. Yet the recent surge in nonconsensual prompts shows that these safeguards have been inadequate. In response, Grok acknowledged lapses in its protections, stating it is working to improve them.
Regulatory Scrutiny Across Borders
The backlash has not remained confined to social media outrage. French authorities confirmed that prosecutors are investigating reports of AI-generated deepfakes from Grok, noting that distributing sexually explicit material of minors is punishable by up to two years in prison. The European Union’s Digital Services Act also provides a framework under which online platforms can be held accountable for illegal content.
In India, the Ministry of Electronics and Information Technology (MeitY) demanded a “comprehensive technical, procedural, and governance-level review” of Grok’s operations, citing cases of obscene images of women being generated and circulated. Officials instructed X to submit an action report within three days and remove content violating Indian law.
Meanwhile, in the United Kingdom, Alex Davies-Jones, Minister for Victims & Violence Against Women and Girls, publicly urged Musk to act, highlighting Grok’s potential for mass exploitation. She emphasized that the AI’s ability to generate hundreds of sexualized images per minute poses unprecedented risks, particularly to women and girls. The UK is considering legislation that would criminalize the creation and dissemination of sexually explicit deepfakes, potentially adding legal pressure on platforms like X.
Grok’s Response and Corporate Accountability
In public statements on X, Grok admitted that “isolated cases” had produced images of minors in minimal clothing and reassured users that CSAM (Child Sexual Abuse Material) is illegal and prohibited. The company stressed ongoing improvements to block such requests, although the specifics of these safeguards remain unclear. In one reply, Grok claimed that advanced filtering and monitoring could prevent most cases, but conceded that “no system is 100% foolproof.”
Legal experts question whether platform operators like xAI can rely on traditional liability protections. Under U.S. law, Section 230 of the Communications Decency Act shields platforms from responsibility for user-generated content. Yet, some attorneys, including technology-facilitated abuse specialist Allison Mahoney, suggest that AI tools capable of generating content may blur the line between platform and creator.
“There needs to be clear legal avenues to hold platforms accountable,” Mahoney told Business Insider, underscoring the unresolved tension between innovation and responsibility.
The Role of AI NSFW Features in Escalating Risks
Grok’s challenges are partly rooted in features Musk has openly promoted. In August 2025, the platform introduced a “spicy” mode for AI-generated adult content. Although the mode was not enabled for uploaded photos, it signaled the platform’s appetite for sexually explicit content and created a policy gray area that some users exploited.
The problem gained momentum after Wired reported in late December 2025 that other AI models, including OpenAI’s ChatGPT and Google’s Gemini, were being used to produce sexualized images from clothed photos. Grok appears to have followed this trend, demonstrating the ease with which AI can be weaponized for sexualized deepfakes.
Global Legal and Ethical Implications
The Grok controversy underscores how AI technologies are outpacing existing legal frameworks. In the United States, the Take It Down Act provides some protection against nonconsensual deepfakes, particularly for minors. Yet for adults, the law only covers deepfakes depicting sexual activity or genitalia, leaving broader exploitation scenarios in a gray zone.
Internationally, governments are grappling with similar challenges. France and India have acted swiftly, while the UK is considering criminal legislation specifically targeting AI-generated sexual content. EU regulators are monitoring compliance with the Digital Services Act, which requires online platforms to mitigate illegal content proactively.
Experts warn that without robust regulation, AI-generated deepfakes could fuel widespread harassment, revenge porn, and even child exploitation. Tools like Grok, with high-speed image-generation capabilities, can produce content at a volume that makes moderation extremely difficult.
The immediate priority for Musk and xAI is technical fixes—improving filters, monitoring, and enforcement—but the broader challenge involves reconciling innovation with ethical and legal responsibilities.
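xAI has not published the details of its safeguards, but a common industry pattern is to screen prompts before any image is generated, combining rule-based blocklists with a learned risk classifier. The sketch below is a purely hypothetical illustration of that pattern in Python; it is not xAI’s implementation, and every name in it (SafetyGate, BLOCKED_PATTERNS, the 0.5 threshold) is an illustrative assumption.

```python
import re
from dataclasses import dataclass

# Hypothetical patterns a pre-generation filter might screen for.
# Real systems combine many signals; this blocklist is illustrative only.
BLOCKED_PATTERNS = [
    r"\bremov(e|ing)\s+(her|his|their)?\s*cloth(es|ing)\b",
    r"\bundress(ed|ing)?\b",
    r"\b(nude|naked)\b",
]

@dataclass
class Verdict:
    allowed: bool
    reason: str

class SafetyGate:
    """Screens an image-edit prompt before generation is attempted."""

    def __init__(self, classifier=None, threshold=0.5):
        # `classifier` stands in for a learned NSFW/abuse model; here it
        # is an optional callable returning a risk score in [0, 1].
        self.classifier = classifier
        self.threshold = threshold

    def check(self, prompt: str) -> Verdict:
        lowered = prompt.lower()
        for pattern in BLOCKED_PATTERNS:
            if re.search(pattern, lowered):
                return Verdict(False, f"matched blocked pattern: {pattern}")
        if self.classifier is not None:
            score = self.classifier(prompt)
            if score >= self.threshold:
                return Verdict(False, f"risk score {score:.2f} over threshold")
        return Verdict(True, "no policy violation detected")

if __name__ == "__main__":
    gate = SafetyGate()
    print(gate.check("remove her clothes in this photo"))    # blocked
    print(gate.check("add a winter coat to this portrait"))  # allowed
```

As Grok itself conceded, no such gate is foolproof: keyword blocklists are easily paraphrased around, which is why layered monitoring and post-generation review also feature in the company’s stated plans.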
Against this backdrop, companies like xAI will likely face increasing pressure to implement demonstrable safeguards and adopt proactive governance measures as governments intensify oversight. Musk’s public statements and past promotion of NSFW AI features now collide with growing demands for accountability, forcing a reckoning over whether AI platforms can operate responsibly without stricter regulation.