TikTok Tightens Age Detection Across Europe as Australia’s Under-16 Social Media Ban Raises the Stakes

TikTok is rolling out a new age-detection system across Europe, a recalibration by the platform as governments move from warning shots to hard regulation of children’s access to social media, Reuters reports.

The move follows Australia’s enactment of a landmark law restricting children under 16 from using social media, a policy shift that has already forced platforms to shut out large numbers of young users and sharpened pressure on regulators elsewhere to act.

The ByteDance-owned company said the new system, which will be deployed in the coming weeks, is designed to better identify accounts belonging to children under 13, TikTok’s global minimum age. Unlike traditional age gates that rely on self-declared birthdays, the technology analyzes a mix of profile information, posted videos, and behavioral signals to assess whether an account is likely underage.


Accounts flagged by the system will not be removed automatically; instead, they will be sent to specialist moderators for review, a step TikTok says is meant to reduce errors while complying with Europe’s strict data-protection rules.

The rollout builds on a year-long pilot in Britain that resulted in the removal of thousands of additional underage accounts. The UK trial offered TikTok both evidence that the approach could work and a template that could be adapted for the rest of Europe, where regulators have long complained that platforms are failing to enforce their own age limits.

Australia’s under-16 social media law, the most far-reaching of its kind globally, has altered the regulatory conversation. By placing the burden squarely on platforms to prevent children from accessing social networks, the law has effectively cut off millions of minors from mainstream social media services and demonstrated that governments are willing to accept disruption in the name of child safety. European policymakers have taken note. The European Parliament is pushing for clearer age limits, while countries such as Denmark are openly discussing bans for children under 15.

Against that backdrop, TikTok’s Europe-focused system looks less like a voluntary upgrade and more like a pre-emptive response to the risk of tougher mandates. European authorities are increasingly skeptical of approaches they view as either ineffective or overly invasive, and age verification sits at the center of that tension. Demanding identity documents from all users raises privacy concerns under the General Data Protection Regulation, yet self-reporting has proved easy to circumvent.

TikTok says its solution is designed to thread that needle. The company argues there is no globally agreed method for confirming age while preserving privacy, and that inference-based systems offer a middle ground. For users who appeal a ban, TikTok will still rely on more explicit checks, including facial age estimation from verification provider Yoti, credit card verification, and government-issued identification. Meta also uses Yoti to verify ages on Facebook, a sign of a broader industry shift toward layered verification rather than a single gatekeeping tool.

The regulatory pressure is not abstract. On Friday, a judge in Delaware was scheduled to hear TikTok’s bid to dismiss a lawsuit filed by the parents of five British children who died while allegedly attempting online challenges. The lawsuit alleges that TikTok’s algorithms promoted dangerous content to children, including the so-called “blackout challenge.”

TikTok has expressed sympathy for the families and said it strictly prohibits content that encourages harmful behavior, but the case underscores the legal exposure platforms face when underage users slip through enforcement gaps.

TikTok has emphasized that the new technology was built specifically for Europe and developed in consultation with Ireland’s Data Protection Commission, its lead privacy regulator in the EU. European users will be notified as the system goes live, the company said, a nod to transparency requirements under EU law.

However, it is not clear whether the approach satisfies regulators. Inference-based systems risk false positives that could lock out legitimate users, as well as false negatives that allow underage accounts to persist. There are also unresolved questions about how transparent platforms will be about the signals they use and how consistently human moderators apply standards at scale.

What is clear is that the regulatory climate has shifted. Australia’s under-16 ban has shown that governments are willing to impose blunt restrictions if they believe platforms are not acting fast enough. TikTok’s European rollout suggests the company is betting that smarter detection, rather than outright bans, can convince regulators that tougher measures are unnecessary. The coming months will test whether that bet holds as Europe weighs its next steps on child safety online.
