YouTube is developing new tools to protect artists and creators from the unauthorized use of their likenesses. The company said on Thursday that new tech to detect AI-generated content using a person’s face or singing voice is in the pipeline, with a pilot program for the singing-voice tool starting early next year.
The upcoming face-detection tech is designed to let people from various industries “detect and manage” content that uses an AI-generated depiction of their face. YouTube says it’s building the tools to allow creators, actors, musicians and athletes to find and choose what to do about videos that include a deepfake version of their likeness. The company hasn’t yet specified a release date for the face-detection tools.
Meanwhile, the “synthetic-singing identification” tech will be part of Content ID, YouTube’s automated IP protection system. The company says the tool will let partners find and manage content that uses AI-generated versions of their singing voices.
“As AI evolves, we believe it should enhance human creativity, not replace it,” Amjad Hanif, YouTube’s vice president of creator products, wrote in a blog post. “We’re committed to working with our partners to ensure future advancements amplify their voices, and we’ll continue to develop guardrails to address concerns and achieve our common goals.”