California Attorney General Rob Bonta said his office is building a dedicated artificial intelligence accountability program as it investigates Elon Musk’s AI company, xAI, over the alleged generation of non-consensual sexually explicit images.
In an interview on Tuesday, Bonta confirmed that his office sent a cease-and-desist letter to xAI last month amid regulatory scrutiny over sexualized content produced by its chatbot, Grok. Authorities are seeking assurances that the conduct has stopped and are continuing discussions with the company.
“Just because you stop going forward doesn’t mean you get a pass on what you did,” Bonta said, signaling that potential enforcement action may not hinge solely on corrective steps taken after the fact.
The investigation centers on Grok’s alleged generation of sexualized images of adults and potentially minors without consent. Regulators globally have examined whether AI tools are facilitating the creation of synthetic explicit content that may violate privacy, harassment, or child protection laws.
In January, xAI said it implemented safeguards to reject requests for sexualized images of real individuals and to block such image generation in jurisdictions where it is illegal. The company also said it modified outputs — for example, altering requested explicit depictions into less revealing images.
Bonta, however, said xAI had deflected responsibility and that some sexualized content generation remains accessible to paying subscribers. His office is seeking confirmation that problematic conduct has ceased entirely.
The probe points to a growing regulatory focus on generative AI systems that can create realistic imagery and conversational content at scale — capabilities that raise complex questions around consent, intellectual property, and platform liability.
California Positions Itself as an AI Enforcer
California’s move underscores its intention to assert state-level authority in AI governance, even as federal lawmakers debate national standards.
Bonta said his office is “beefing up” internal expertise through an “AI oversight, accountability and regulation program.” The initiative is designed to build technical capacity within the attorney general’s office to investigate AI systems and enforce consumer protection, civil rights, and child safety laws.
He warned against granting Congress exclusive regulatory authority over AI, citing prior legislative gridlock on data protection and digital privacy.
California has historically played an outsized role in technology regulation — from privacy laws such as the California Consumer Privacy Act (CCPA) to enforcement actions involving major tech firms. With many AI companies headquartered in the state, local authorities have both jurisdictional reach and political incentive to act.
Bonta said AI chatbots that engage in sexually explicit conversations with minors or provide instructions for self-harm are unacceptable, framing the issue as part of a wider consumer protection and child safety challenge.
The scrutiny of xAI follows heightened awareness of generative AI misuse, including the creation of deepfake pornography and harmful conversational outputs. Law enforcement agencies and advocacy groups have warned that synthetic media tools can amplify harassment, blackmail, and exploitation.
State authorities have also notified OpenAI that California maintains an “ongoing interest” in its safety practices, particularly following the attorney general’s office’s involvement in overseeing aspects of the company’s corporate restructuring last year.
Legislative Backdrop
California lawmakers are considering a bill that would formally require the attorney general’s office to establish a program dedicated to building AI expertise and regulatory capacity. If passed, it would institutionalize oversight mechanisms at a time when AI capabilities are rapidly evolving.
In a joint interview, William Tong, Connecticut’s attorney general, described AI-related harm as “the consumer protection fight of our time,” saying its potential societal impact could match or exceed that of past public health and consumer crises.
“This affects all of our children,” Tong said.
Industry Pushback and Federal-State Tensions
The investigation also surfaces tension between state regulators and industry advocates who argue that a patchwork of state rules could stifle innovation and create compliance burdens.
Some Republican lawmakers have called for federal preemption — a single national framework governing AI — to prevent divergent state-level enforcement.
Bonta’s stance suggests California is unwilling to wait for federal consensus. His office’s actions indicate a view that existing consumer protection and civil rights statutes already provide authority to pursue AI-related misconduct.
The outcome of California’s probe into xAI could signal how aggressively state authorities plan to police generative AI systems. Potential consequences range from negotiated compliance agreements and fines to broader litigation under consumer protection or child safety statutes.
As AI tools become more capable of producing realistic synthetic content, regulators are confronting the limits of voluntary safeguards. California’s emerging AI accountability program suggests that oversight is shifting from advisory guidance to enforcement-backed scrutiny.
The case against xAI may become an early test of how far state attorneys general can go in holding AI developers responsible for harmful outputs.