Britain’s media regulator Ofcom and privacy watchdog the Information Commissioner’s Office (ICO) issued stark warnings to major social media platforms on Thursday, demanding urgent improvements to age verification and child safety measures.
The regulators accused Facebook, Instagram (Meta), TikTok (ByteDance), YouTube (Alphabet), Snapchat, and Roblox of failing to enforce their own minimum age rules, exposing children to harmful or addictive content through algorithmic feeds.
“These online services are household names, but they’re failing to put children’s safety at the heart of their products. That must now change quickly, or Ofcom will act,” Ofcom CEO Melanie Dawes said.
ICO Chief Executive Paul Arnold added: “There’s now modern technology at your fingertips, so there is no excuse,” referring to advanced age-assurance tools that could reliably block under-13s from services not designed for them.
The demands come under the latest implementation phase of the Online Safety Act, which gives Ofcom sweeping enforcement powers. Platforms have until April 30, 2026, to demonstrate how they will:
- Strengthen age checks and verification processes.
- Restrict stranger contact with children.
- Make algorithmic feeds safer for minors.
- Stop testing new products or features on children.
The ICO issued a parallel open letter calling for adoption of “modern, viable” age-assurance technologies, ranging from AI-based age estimation to device-level checks, to prevent under-13 access. Both regulators emphasized that current methods (self-declaration, weak age gates) are inadequate and that platforms must move beyond minimal compliance.
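To make the distinction concrete, below is a minimal sketch, in Python, of how a service might combine several age signals instead of relying on self-declaration alone. All field names, thresholds, and the decision logic are hypothetical illustrations, not any platform’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical signals an age-assurance pipeline might combine. Field names,
# thresholds, and decision logic are illustrative, not any platform's system.
@dataclass
class AgeSignals:
    declared_age: int               # self-declared at sign-up (the weakest signal)
    estimated_age: Optional[float]  # AI-based age estimate, if one is available
    device_reported_minor: bool     # device- or app-store-level flag, if supported

MIN_AGE = 13  # the minimum age regulators want reliably enforced

def allow_access(signals: AgeSignals) -> bool:
    """Allow access only when every available signal clears the minimum age.

    Self-declaration alone never grants access, reflecting the regulators'
    point that weak age gates do not count as assurance.
    """
    if signals.device_reported_minor:
        return False
    if signals.estimated_age is not None and signals.estimated_age < MIN_AGE:
        return False
    # Declared age is a final sanity check, never the sole gate.
    return signals.declared_age >= MIN_AGE

# A user who claims to be 16 but whose estimated age is 11 is blocked.
print(allow_access(AgeSignals(declared_age=16, estimated_age=11.2,
                              device_reported_minor=False)))  # False
```

The point of the sketch is simply that a self-declared birthdate becomes one input among several rather than the sole gate, which is the gap both regulators say current methods leave open.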
Platform Responses
Meta stated it already employs AI-based age detection and age-estimation tools, places teens in accounts with built-in protections (e.g., private by default, restricted messaging), and advocates for centralized age verification at the app-store level to avoid repeated data requests.
A spokesperson said: “Age should be verified centrally at the app store level so families do not have to provide personal information multiple times.”
YouTube pushed back on Ofcom’s approach, urging the regulator to focus on “high-risk services” rather than issuing blanket demands. The platform highlighted its age-appropriate experiences for younger users and said it was “surprised to see Ofcom move away from a risk-based approach.”
Roblox noted it had launched more than 140 new safety features in the past year, including mandatory age checks for chat functions to prevent adult-child communication.
“While no system is ever perfect, we continue to strengthen protections designed to keep players safe,” a spokesperson said.
Enforcement Powers and Precedent
Ofcom can impose fines of up to 10% of qualifying global revenue for non-compliance with the Online Safety Act. The ICO can levy penalties of up to 4% of global annual turnover under data protection law. The ICO last month fined Reddit £14.5 million ($18 million) for failing to implement meaningful age checks and unlawfully processing children’s data — a clear warning to platforms that regulators are willing to use their full authority.
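For a sense of scale, here is a minimal sketch of the maximum exposure under each cap. The revenue figure is hypothetical, not any company’s actual accounts, and real penalties would depend on the specifics of each case.

```python
# Illustrative ceiling on regulatory exposure under the two regimes above.
# The revenue figure is hypothetical, not any company's actual accounts.
OFCOM_CAP = 0.10  # up to 10% of qualifying global revenue (Online Safety Act)
ICO_CAP = 0.04    # up to 4% of global annual turnover (data protection law)

def max_exposure(global_revenue_usd: float) -> dict:
    """Return the maximum fine each regulator could levy, in USD."""
    return {
        "ofcom_max": global_revenue_usd * OFCOM_CAP,
        "ico_max": global_revenue_usd * ICO_CAP,
    }

# A platform with $100B in global revenue could in principle face a fine of
# up to $10B from Ofcom alone, which is why fines in the billions are
# plausible for the largest platforms.
print(max_exposure(100e9))  # {'ofcom_max': 10000000000.0, 'ico_max': 4000000000.0}
```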
The regulators’ actions align with growing political pressure to protect children online. Britain has been considering legislation to bar under-16s from social media platforms entirely, mirroring Australia’s recent approach. The Online Safety Act already requires platforms to conduct risk assessments for child safety and implement proportionate measures, but enforcement has been gradual, with the current phase focusing on age assurance and feed safety.
The demands indicate mounting concern over algorithmic feeds that prioritize engagement over safety, exposing children to harmful content (violence, self-harm, eating disorders, grooming). Ofcom’s research shows children as young as 8 regularly encounter such material, with many platforms failing to act swiftly on reports or proactively filter feeds.
The timing coincides with heightened global scrutiny of tech firms’ responsibility toward minors. The EU’s Digital Services Act and AI Act impose related obligations, while U.S. states have passed or proposed age-verification and parental-consent laws. Britain’s regulators are moving faster than most, leveraging the Online Safety Act’s broad powers to demand systemic changes rather than incremental fixes.
The April 30 deadline sets up a high-stakes compliance test. Platforms face a choice: invest heavily in robust age-assurance technologies (facial estimation, behavioral analysis, device-level checks) or risk substantial fines and reputational damage. The ICO’s £14.5 million Reddit penalty demonstrates that enforcement is not theoretical.
This means Meta, TikTok, YouTube, Snapchat, and Roblox, all household names with massive child user bases, must raise their game. Failure to act could trigger the most significant enforcement actions yet under the Online Safety Act, with potential fines running into the billions and forced product changes.