TikTok has put hundreds of UK content moderators’ jobs at risk, even as tighter rules come into effect to stop the spread of harmful material online.
The viral video app said several hundred jobs in its trust and safety team could be affected in the UK, as well as in South and South-East Asia, as part of a global reorganization. The work will be reallocated to other European offices and third-party providers, with some trust and safety roles remaining in the UK, the company said.
It is part of a wider move at TikTok to rely more heavily on artificial intelligence for moderation. The company said that more than 85% of the content removed for violating its community guidelines is already identified and taken down by automation.
The timing of the cuts has heightened scrutiny, as they come despite the recent introduction of new UK online safety rules, which require companies to introduce age checks on users attempting to view potentially harmful content. Under the Online Safety Act, companies can be fined up to £18 million or 10% of their global turnover for breaches, whichever is greater.
John Chadfield of the Communication Workers Union warned that replacing workers with AI in content moderation could put the safety of millions of TikTok users at risk.
“TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favor of hastily developed, immature AI alternatives,” he said.
TikTok, which is owned by Chinese tech group ByteDance, employs more than 2,500 staff in the UK. But over the past year, the company has been systematically cutting trust and safety staff around the world, often substituting automated systems for human workers. In September, it dismissed its entire team of 300 content moderators in the Netherlands. A month later, it announced plans to replace about 500 content moderation employees in Malaysia as part of its shift to AI. Last week, TikTok workers in Germany went on strike over layoffs in its trust and safety team.
Despite the turmoil behind the scenes, business at TikTok is booming. Accounts filed with Companies House this week, covering its UK and European operations, showed revenues rose 38% to $6.3 billion (£4.7bn) in 2024 compared with the year before. Its operating loss narrowed sharply from $1.4 billion in 2023 to $485 million, signaling a path to profitability.
A TikTok spokesperson defended the restructuring, saying: “We are continuing a reorganization that we started last year to strengthen our global operating model for trust and safety, which includes concentrating our operations in fewer locations globally to ensure that we maximize effectiveness and speed as we evolve this critical function for the company with the benefit of technological advancements.”
Comparisons with Other Platforms
TikTok’s pivot toward AI mirrors a wider trend across the tech industry, where rival platforms such as Meta, X (formerly Twitter), and YouTube have leaned heavily on automated systems to flag and remove harmful material.
Meta, which runs Facebook and Instagram, has long promoted its reliance on machine learning to detect hate speech and misinformation. Yet the company has faced repeated public backlash when violent livestreams, extremist propaganda, or harmful conspiracy content slipped through its filters. After the Christchurch mosque attack in 2019, Facebook came under fire when automated moderation failed to stop the livestream from spreading widely.
YouTube has also faced criticism for similar failures. While it boasts that the majority of harmful content is taken down by automation, researchers and advocacy groups have documented cases where extremist videos, disinformation, and child exploitation material circulated on the platform for months before being removed.
X, under Elon Musk’s leadership, has slashed human moderation teams while leaning more heavily on automation and community flagging through the “Community Notes” system. The move has attracted criticism from regulators in Europe, who say hate speech and harmful disinformation have surged since Musk’s cuts. The EU has already issued warnings under its new Digital Services Act, which carries steep penalties for non-compliance.
Experts warn that TikTok could face similar fallout. While AI can process billions of pieces of content far faster than human moderators, it struggles with nuance, context, and cultural sensitivity — areas where human judgment remains crucial.
The UK’s Online Safety Act places new and significant responsibility on tech companies to keep users safe, particularly children. Fines under the law can run into billions of pounds for a platform of TikTok’s size. By cutting moderation staff at the very moment regulators are tightening oversight, TikTok risks sending a message that efficiency and cost-cutting outweigh safety concerns.



