Generative AI is making it easier than ever to spread misinformation online, and YouTube is trying to curb that with a new policy. Google’s video-sharing platform now requires YouTubers to disclose AI-generated content in their videos or risk being demonetized.
The change comes through new YouTube guidelines that take effect next month. Under the new policies, content creators will have to disclose whether their videos include realistic-looking content that has been altered or synthesized using artificial intelligence.
This means that if a video contains AI-generated imagery or other synthetic content, such as deepfakes, the creator will have to disclose it publicly. The concern is that AI tools make it easy, for example, to fabricate footage of someone saying something they never said. The main objective of the policy is to stop creators from misusing AI.
YouTubers who fail to disclose AI-generated content risk more than demonetization: repeat offenders may have their videos removed or their accounts suspended, and they may face suspension from the YouTube Partner Program, among other penalties.
Videos containing generative AI content will also display a label reading: “Altered or synthetic content. Sounds or visuals were altered or generated digitally.”
YouTube emphasizes that the label alone may not suffice: if a video breaches community guidelines, it may be removed regardless of whether the creator disclosed the AI content.
Furthermore, YouTube has clarified that content created using its generative AI products and features will also bear a label.
Finally, users will have the option to request the removal of AI-generated content on YouTube that they deem inaccurate. For instance, if AI-generated content simulates an identifiable individual’s face or voice, the company will take action to remove it.
Similarly, music partners will be able to request the removal of AI-generated music content.