
YouTube is launching a new feature that may stir the waters of creativity. Promising greater transparency and authenticity, the platform is introducing a tool for labeling AI-generated material, and creators are being asked to clearly mark such content. How will this affect the digital world and creativity?
Today is a big day for YouTube: the platform is introducing a brand-new feature that opens the door to the future of video creation. It will let creators indicate whether their videos contain AI-generated or synthetic material.

When uploading and publishing a video, creators will see a small checkbox asking them to disclose whether the video contains "edited or synthetic" content that looks real. This covers situations where it may appear that someone said or did something they never actually did. It also covers footage edited to depict real events and places, or to present a "realistic-looking scene" that never happened. YouTube gives examples such as showing a fake tornado heading toward a real city, or using a deepfake voice to make it seem as if a real person is narrating a video.
However, creators will not have to flag content that is clearly unrealistic, such as filters, background-blurring special effects, or animation that doesn't even attempt to look real.

In November, YouTube detailed its policy on AI-generated content, creating two tiers of rules: strict policies to protect music labels and artists, and looser guidelines for everyone else. Deepfake music, such as Drake singing an Ice Spice song or rapping over a track written by someone else, can be removed at the request of the artist's label if the label objects.
The rules will require creators to flag AI-generated material, but there's no clear-cut way to enforce that yet. And if you're an ordinary person who has become the subject of a deepfake on YouTube, getting it removed could be far more complicated: you'd have to fill out a private complaint form that the company will review. Today, YouTube said little about that process, except that it is "continuing to update its privacy process."

Like other platforms that have already implemented AI-content labeling, YouTube is relying on an honor system: creators must truthfully flag whether their videos contain artificially generated material. YouTube spokesperson Jack Malon previously told The Verge that YouTube is "investing in tools" to recognize AI-generated content. However, AI-detection software is known to be less than accurate.
A new blog post from YouTube says the platform may add AI labels to videos itself, even when the creator hasn't done so, "especially if the generated content has the potential to confuse or mislead people." Videos touching on sensitive topics like health, elections, and finance will get even more prominent in-video labels, similar to those shown when a video contains a paid collaboration.

Creators who don't take the new rule to heart and fail to label synthetic content as required could face penalties, including having their videos removed or being temporarily suspended from the YouTube Partner Program, which lets creators earn money.
