YouTube CEO Neal Mohan has signaled that 2026 will be a pivotal year for the world’s largest video platform, as it confronts the growing flood of low-quality AI-generated content and the more dangerous rise of deepfakes that blur the line between reality and fabrication.
In his annual letter published Wednesday, Mohan said the platform is prioritizing efforts to reduce what has come to be known as “AI slop” while strengthening systems to detect and remove deepfakes, describing the challenge as increasingly urgent as artificial intelligence becomes embedded across the internet.
“It’s becoming harder to detect what’s real and what’s AI-generated,” Mohan wrote. “This is particularly critical when it comes to deepfakes.”
The comments underline a broader tension facing YouTube and other major social media platforms: AI is supercharging creativity and scale even as it threatens trust, quality, and authenticity at an unprecedented level. As a Google-owned company that sits at the center of user-generated content, YouTube is now one of the primary battlegrounds for how the internet manages that trade-off.
Google has poured billions of dollars into AI infrastructure, expanding data centers, developing its Gemini models, and weaving AI tools into consumer and enterprise products. On YouTube, those investments are already reshaping how videos are created, edited, and distributed. More than 1 million channels used YouTube’s AI creation tools daily on average in December, Mohan said, a figure that illustrates just how rapidly synthetic content has entered the mainstream.
That surge has come with consequences. AI slop, a term for low-effort, repetitive, or low-quality AI-generated videos churned out in large volumes, has become a growing problem across platforms that rely on algorithmic recommendations. YouTube, Meta, and TikTok all use AI-driven systems designed to maximize engagement, which can unintentionally amplify such content if it proves clickable, even when it adds little value for viewers.
Mohan said YouTube is treating this as an inflection point, one where “the lines between creativity and technology are blurring.” The company plans to build on systems it has long used to fight spam, clickbait, and repetitive content, adapting them to the new reality of AI-generated media.
Rather than banning AI-generated videos outright, YouTube’s strategy is focused on transparency and enforcement. Mohan said the platform clearly labels videos created using AI tools and requires creators to disclose when content has been altered or synthetically generated. Videos deemed to be “harmful synthetic media” are removed when they violate YouTube’s guidelines, particularly in cases involving deception, impersonation, or abuse.
Deepfakes present a sharper reputational and legal risk. Unlike low-quality AI content, deepfakes can undermine trust, spread misinformation, and exploit individuals by using their likeness without consent. In December, YouTube announced it would expand its likeness detection technology, which flags videos where a creator’s face has been used without permission. That feature is now being rolled out to millions of creators in the YouTube Partner Program.
The emphasis on deepfake detection reflects pressure not just from users, but from advertisers, regulators, and public figures who are increasingly wary of their images being misused. Keeping the platform attractive to users, creators, and advertisers alike remains central to YouTube’s business model, particularly as scrutiny of online harms intensifies globally.
At the same time, Mohan was careful to position AI as an enabler rather than a replacement for human creativity.
“We’ll use AI as a tool, not a replacement,” he wrote, a framing that mirrors Google’s broader messaging as it seeks to reassure creators that automation will not hollow out their livelihoods.
YouTube is, in fact, expanding the ways creators can use AI, especially on Shorts, its short-form video product that competes directly with TikTok and Instagram Reels. Mohan said creators will soon be able to generate Shorts using their own likeness, produce games from simple text prompts, and experiment with music creation. These tools lower the barrier to entry while potentially accelerating content production.
This dual approach, cracking down on abuse while widening access to AI tools, highlights the tightrope YouTube is walking. Too heavy a hand risks alienating creators who are driving growth. Too light a touch risks flooding the platform with content that erodes user trust and advertiser confidence.
Mohan framed creators as central to YouTube’s future, calling them “the new stars and studios.” He pointed to creators buying studio-sized lots in Hollywood and elsewhere, producing high-budget content that increasingly resembles traditional television. To support that evolution, YouTube is pushing new monetization options, from shopping integrations and brand partnerships to fan-funding features such as Jewels and gifts.
Another area of focus is younger audiences. Mohan said making YouTube “the best place for kids and teens” is a priority, with new tools planned to simplify the creation and management of children’s accounts and allow parents to switch between profiles more easily. That push comes as regulators and parents alike demand stronger safeguards for minors online, particularly as AI-generated content becomes more pervasive.
Financially, YouTube’s scale gives it both leverage and exposure. The company said in September that it has paid out more than $100 billion to creators, artists, and media companies since 2021, underscoring its role as one of the largest engines of the creator economy. Analysts at MoffettNathanson estimated earlier this year that if YouTube were a standalone business, it would be worth between $475 billion and $550 billion.
Those numbers help explain why the fight against AI slop and deepfakes matters so much. YouTube’s valuation, influence, and long-term growth all depend on whether it can maintain trust while embracing the next wave of AI-driven creativity. Mohan’s letter makes clear that 2026 will be less about whether AI belongs on YouTube and more about whether the platform can impose enough order on synthetic media to keep its ecosystem credible, competitive, and commercially viable.