Spain is moving aggressively to tighten regulation of artificial intelligence and social media platforms, positioning itself at the forefront of Europe’s widening confrontation with major technology companies over online safety, algorithmic transparency, and the societal impact of digital platforms.
Speaking to Reuters, Spain’s digital transformation minister, Oscar Lopez, said Madrid would press ahead with stricter rules governing AI systems and social media companies despite what he described as intense lobbying from the tech industry.
“The profit of four tech companies cannot come at the expense of the rights of millions,” Lopez said, arguing that some of the world’s largest technology firms were resisting regulations designed to force disclosure of how algorithms shape online behavior and limit the deployment of high-risk AI systems.
His remarks follow a broader shift across Europe, where governments are increasingly treating social media and generative AI not merely as innovation sectors but as areas requiring public-health, child-protection, and democratic safeguards.
The comments also align Spain closely with the tougher regulatory posture emerging from the European Commission under the leadership of Ursula von der Leyen. Von der Leyen said this week that Brussels intends to target addictive and harmful design features used by social media firms through the forthcoming Digital Fairness Act, a major legislative initiative expected to expand Europe’s digital regulatory architecture.
The proposed framework comes as European policymakers grow increasingly concerned that recommendation algorithms optimized for engagement are amplifying misinformation, self-harm content, cyberbullying, and extremist material, particularly among minors.
Spain has emerged as one of the bloc’s most assertive voices on the issue. Earlier this year, the government announced plans to ban social media use by teenagers, with legislation already advancing through parliament. Authorities are also pursuing measures that would hold platform executives personally liable for hate speech hosted on their services, marking one of the most aggressive accountability proposals introduced by a European government.
The initiatives triggered backlash from some figures in the technology industry, including Elon Musk, owner of X, formerly known as Twitter. Musk accused Socialist Prime Minister Pedro Sanchez of authoritarianism following the proposals, deepening an already tense relationship between European regulators and major Silicon Valley companies.
The clash underscores the widening ideological divide between Europe’s precautionary regulatory model and the more permissive approach often favored by segments of the U.S. technology industry.
Lopez said Spain favors a coordinated European framework because enforcement across a market of more than 400 million consumers is significantly more effective than a patchwork of national regulations. He warned that supporters of a laissez-faire approach to AI and social media governance would ultimately regret defending what he described as “the law of the jungle.”
Spain is not alone. Governments worldwide are moving toward stricter oversight of digital platforms. Australia has intensified efforts to regulate harmful online content targeting children, while France and Greece have also advocated tougher controls on platform design and age-verification systems.
The debate is rooted in mounting concern about the psychological and social effects of digital platforms on younger users. Lopez linked Spain’s push directly to rising cases of cyberbullying, online sexual harassment, and AI-generated sexual deepfakes involving minors, especially girls.
He described the situation as a mental health pandemic, reflecting growing alarm among policymakers and health experts over evidence connecting excessive social media exposure with anxiety, depression, addiction-like behavior, and declining attention spans among adolescents.
The emergence of generative AI has further intensified those concerns. European officials fear that synthetic media tools capable of producing realistic fake images, videos, and voices could accelerate abuse, misinformation, and political manipulation if left largely unregulated.
Spain has therefore positioned itself as a leading advocate for what Lopez called “trustworthy AI,” an approach that prioritizes privacy protections, democratic safeguards, child safety, and accountability mechanisms over rapid commercial deployment. The language mirrors broader European Union efforts to establish global standards for AI governance through the bloc’s AI Act, which classifies certain technologies according to risk and imposes stricter obligations on systems deemed potentially harmful.
European officials increasingly argue that regulation could become a competitive advantage rather than a barrier to innovation, allowing the region to differentiate itself from both the United States’ market-driven model and China’s state-centric digital ecosystem. Lopez also signaled support for stronger accountability around online anonymity. Asked whether authorities should be able to identify individuals using pseudonyms online if they commit crimes, he said anonymity should not shield offenders from legal responsibility.
“What isn’t legal in the real world cannot be legal in the virtual world. Full stop,” he said.
That position is likely to fuel further debate among privacy advocates and civil-liberties groups, many of whom warn that weakening anonymity protections could expose journalists, activists, and dissidents to surveillance or retaliation.
Still, the direction of travel across Europe appears increasingly clear. Policymakers who once focused primarily on market competition and data privacy are now framing digital regulation as a matter of national security, democratic resilience, and public health.