
YouTube Requires Creators to Disclose if Content is AI-Generated or Risk Losing Monetization

Image caption: A YouTube logo on display during LeWeb Paris 2012 in Saint-Denis, near Paris. Photo: Eric Piermont/AFP/Getty Images.

YouTube has announced a new policy that will affect creators who use artificial intelligence (AI) to generate content for their videos. Under the policy, creators must disclose in the video description if any part of their content is AI-generated, such as a synthetic voice, face, or script. Failure to do so may result in removal of the video or suspension from earning revenue.

The policy aims to prevent misinformation, deception, and manipulation that may arise from the use of AI technologies, especially deepfakes, which are realistic but fake videos created by swapping faces or altering voices. YouTube says that it respects the creative and innovative potential of AI, but also wants to protect its users and community from harmful or misleading content.

The policy will apply to all videos uploaded after November 30, 2023. Creators who have existing videos that use AI must update their descriptions by December 31, 2023, or risk losing their monetization privileges. YouTube will also provide tools and resources to help creators identify and disclose AI-generated content, as well as educate them on the ethical and legal implications of using such technologies.


How will YouTube know content is AI-generated?

Artificial intelligence is becoming increasingly prevalent across many domains, including content creation. AI can generate text, images, videos, music, and more, with varying degrees of quality and originality. Some of these outputs are indistinguishable from human-made work, while others are easily recognizable as synthetic.

YouTube, as one of the largest platforms for content sharing and consumption, faces a challenge in dealing with AI-generated content. How will YouTube know if a video is created by a human or an AI? And why does it matter?

There are several reasons why YouTube might want to identify and label AI-generated content. One is to protect the intellectual property rights of the original creators, who might not want their work to be copied or modified by AI without their consent. Another is to prevent the spread of misinformation or harmful content that might be generated by malicious actors using AI. A third is to maintain the trust and authenticity of the YouTube community, which values human creativity and expression.

YouTube has not yet published detailed guidelines explaining how it will verify these disclosures, but it is likely working on them. One possible way YouTube could detect AI-generated content is by using AI itself.

YouTube could employ machine learning models trained to recognize the patterns and features of synthetic content, such as unnatural transitions, artifacts, inconsistencies, or anomalies. These models could then flag videos suspected of being AI-generated and route them to human reviewers for verification.
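As a rough illustration of how such a flag-and-review pipeline might fit together, here is a minimal sketch in Python. Everything in it (the artifact signals, the weights, and the review threshold) is a hypothetical placeholder; a real detector would be a trained model over learned features, not hand-picked scores.

```python
# A minimal sketch of a flag-and-review triage pipeline.
# The signals, weights, and threshold below are illustrative
# assumptions, not YouTube's actual detection system.
from dataclasses import dataclass


@dataclass
class FrameSignals:
    blur: float           # edge-softness score, 0..1
    face_warp: float      # facial-landmark inconsistency, 0..1
    temporal_jump: float  # frame-to-frame discontinuity, 0..1


def synthetic_score(frames: list[FrameSignals]) -> float:
    """Blend a few hand-picked artifact signals into one per-video score."""
    if not frames:
        return 0.0
    per_frame = [
        0.40 * f.face_warp + 0.35 * f.temporal_jump + 0.25 * f.blur
        for f in frames
    ]
    return sum(per_frame) / len(per_frame)


REVIEW_THRESHOLD = 0.6  # hypothetical cutoff for escalating to humans


def triage(video_id: str, frames: list[FrameSignals]) -> str:
    """Flag a video for human review when its score crosses the threshold."""
    score = synthetic_score(frames)
    if score >= REVIEW_THRESHOLD:
        return f"{video_id}: flagged for human review (score={score:.2f})"
    return f"{video_id}: passed automated screening (score={score:.2f})"


if __name__ == "__main__":
    suspicious = [FrameSignals(blur=0.2, face_warp=0.8, temporal_jump=0.7)]
    print(triage("abc123", suspicious))
```

The key design point the sketch captures is that the model does not decide alone: high scores only escalate a video to human reviewers, which is the verification step the paragraph above describes.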

Another possible way YouTube could detect AI-generated content is by relying on users themselves. YouTube could encourage users to report or flag any videos they suspect are AI-generated, and provide them with criteria or indicators to look for.

YouTube could also ask users to provide evidence of their own identity and authorship when they upload a video, such as a selfie, a voice recording, or a watermark. This could help YouTube verify the source and legitimacy of the content.
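To make the watermark idea concrete, here is a minimal sketch of how an authorship tag could be issued and checked, assuming a hypothetical scheme in which a creator registers a secret key with the platform and tags each upload with an HMAC. Real provenance systems (for example, C2PA-style content credentials) are considerably richer; this only shows the verification principle.

```python
# A minimal sketch of authorship verification via an upload tag.
# The key-registration scheme is an assumption for illustration,
# not a documented YouTube mechanism.
import hashlib
import hmac


def make_watermark(video_bytes: bytes, creator_secret: bytes) -> str:
    """Creator-side: derive a tag binding the file to the creator's key."""
    return hmac.new(creator_secret, video_bytes, hashlib.sha256).hexdigest()


def verify_watermark(video_bytes: bytes, creator_secret: bytes, tag: str) -> bool:
    """Platform-side: recompute the tag and compare in constant time."""
    expected = make_watermark(video_bytes, creator_secret)
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    secret = b"creator-registered-key"     # hypothetical registered key
    video = b"...raw video bytes..."
    tag = make_watermark(video, secret)
    print(verify_watermark(video, secret, tag))         # True: file intact
    print(verify_watermark(video + b"x", secret, tag))  # False: file altered
```

Note what such a scheme can and cannot do: it proves the uploaded file matches what the key holder tagged, but it says nothing about whether the footage itself was AI-generated, which is why it would complement, not replace, the detection approaches above.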

AI-generated content is not inherently bad or good, but it poses some challenges and opportunities for YouTube and its users. YouTube will need to balance the benefits and risks of allowing or restricting AI-generated content on its platform and ensure that it respects the rights and interests of all parties involved. As AI becomes more advanced and accessible, YouTube will also need to adapt and evolve its policies and practices accordingly.

YouTube’s new policy is part of its broader efforts to combat misinformation and promote transparency on its platform. The company has previously implemented policies to label and remove videos that contain false or misleading information about topics such as elections, vaccines, or COVID-19. It has also partnered with fact-checkers and experts to provide authoritative sources and context for users.

YouTube hopes that its new policy will encourage creators to be more responsible and honest about their use of AI, and also foster a more informed and engaged audience. The company says that it will continue to monitor the development and impact of AI technologies, and update its policies as needed.
