The European Commission has launched a new formal investigation into X, escalating regulatory pressure on the Elon Musk-owned platform over the deployment of its artificial intelligence tool Grok and the risks it poses to users in the European Union.
In a statement, the Commission said the new probe will examine whether X properly assessed and mitigated the risks linked to the rollout of Grok’s functionalities on its platform in the EU. Regulators are particularly concerned about the dissemination of illegal content, including manipulated sexually explicit images, some of which could amount to child sexual abuse material.
“These risks seem to have materialized, exposing citizens in the EU to serious harm,” the Commission said, signaling that its concerns are no longer theoretical but grounded in observed outcomes on the platform.
As part of the new proceedings, the Commission will assess whether X has complied with its obligations under the Digital Services Act to diligently identify and mitigate systemic risks. These include the spread of illegal content, negative effects linked to gender-based violence, and serious consequences for users’ physical and mental well-being arising from the deployment of Grok’s features.
The investigation will also examine whether X conducted and submitted an ad hoc risk assessment report to the Commission before deploying Grok functionalities that significantly altered the platform’s overall risk profile, as required under EU law.
In parallel, the Commission said it has expanded its existing investigation, opened in December 2023, into X’s compliance with the DSA’s rules governing recommender systems. That extension will focus on whether X properly assessed and mitigated systemic risks associated with its recommendation algorithms, including the impact of its recently announced switch to a Grok-based recommender system.
If the Commission’s findings confirm its preliminary concerns, X could be found in breach of several provisions of the DSA, including Articles 34 and 35, which set out obligations for very large online platforms to assess and reduce systemic risks, and Article 42, which governs reporting and oversight requirements.
The Commission stressed that the opening of formal proceedings does not prejudge the final outcome, but said the investigation will be treated as a priority.
The probe is being conducted in close coordination with Coimisiún na Meán, Ireland’s media regulator, which acts as the Digital Services Coordinator in Ireland, X’s EU country of establishment. Under the DSA, the Irish authority will be formally associated with the investigation, reflecting the EU’s cross-border enforcement framework.
As part of its next steps, the Commission said it will continue gathering evidence, including through additional requests for information, interviews, and inspections. It also signaled that interim measures could be imposed if X fails to make meaningful adjustments to its service during the investigation.
The formal proceedings give the Commission broad enforcement powers. These include the ability to issue a non-compliance decision and impose further fines, or to accept commitments offered by X to address the concerns identified. Once proceedings are opened, national regulators in EU member states lose their competence to enforce against the suspected infringements, centralizing oversight at the EU level.
Grok, developed by X’s AI arm, has been integrated into the platform in multiple ways since 2024, enabling users to generate text and images and to receive contextual information related to posts. As a designated very large online platform under the DSA, X is subject to the bloc’s strictest obligations, including the duty to assess and mitigate risks related to illegal content and threats to fundamental rights, particularly those affecting minors.
The new investigation builds on a wider enforcement action that has already seen X penalized under the DSA. On 5 December 2025, the Commission adopted a non-compliance decision against the platform, imposing a €120 million fine over deceptive design practices, lack of advertising transparency, and insufficient data access for researchers. The broader December 2023 proceedings also cover X’s notice-and-action mechanisms and its handling of illegal content, including terrorist material.
In September, the Commission sent X a detailed request for information related to Grok, including questions about antisemitic content generated by the Grok account in mid-2025, highlighting growing regulatory unease about the AI tool’s outputs.
European officials have framed the latest move as part of a broader effort to ensure that AI deployment does not come at the expense of fundamental rights.
“Sexual deepfakes of women and children are a violent, unacceptable form of degradation,” said Henna Virkkunen, the Commission’s executive vice-president for tech sovereignty, security, and democracy. “With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated the rights of European citizens — including those of women and children — as collateral damage of its service.”
Under the DSA, individuals who believe they have been harmed by AI-generated content, including non-consensual intimate images or child sexual abuse material, have the right to lodge complaints with their national Digital Services Coordinator. Support services are also available at the national level for victims of such content.
The case is shaping up as one of the EU’s most significant tests yet of how far the bloc is willing to go in enforcing its landmark digital rulebook against AI-driven features on major platforms, and it could set a precedent for how generative AI systems are governed across Europe.