Likeness detection will flag possible AI fakes, but Google doesn’t guarantee removal.

AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators.

Google’s powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened; even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn’t happening.

Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site’s copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators has been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.
0 sats \ 1 reply \ @Sandman 21 Oct
Hmm! So the new YouTube tool detects AI deepfakes of real faces. It flags fakes but doesn’t guarantee removal, and creators must share more personal info to activate the protection. AI content is booming, and Google helped create it, so a full ban isn’t happening, right?
0 sats \ 0 replies \ @0xbitcoiner OP 21 Oct
right