- Published on December 18, 2024
The video-streaming giant has partnered with Creative Artists Agency (CAA) to help artists identify and manage AI-generated content that uses their likeness.
YouTube, the video-streaming platform owned by Alphabet, has announced a partnership with Creative Artists Agency (CAA) to give some of the world’s leading artists access to technology that lets them ‘identify and manage’ AI-generated content featuring their likeness.
Notably, CAA represents some of the world’s most famous artists, such as Ariana Grande, Dua Lipa, Beyoncé, Sabrina Carpenter, and Miley Cyrus. It also represents figures from sports including basketball, hockey, and soccer, among them Devin Booker, Mathew Barzal, Kyle Walker, and Carlo Ancelotti.
YouTube plans to begin testing its ‘likeness management technology’ next year in collaboration with leading celebrity talent. The tool will let celebrities and their representatives easily submit requests to remove AI-generated content that depicts them, and the partnership will give YouTube insights and feedback on how effective these tools are.
“Over the next few months, we’ll announce new testing cohorts of top YouTube creators, creative professionals, and other leading partners representing talent,” said YouTube.
In September, YouTube announced tools to curb malicious AI content. One such tool is a ‘synthetic-singing identification technology’ that allows artists to detect AI-generated content that simulates their voices.
In the earlier announcement, YouTube said, “We’re actively developing new technology that will enable people from a variety of industries—from creators and actors to musicians and athletes—to detect and manage AI-generated content showing their faces on YouTube.”
The advent of deepfake content online has left several celebrities perturbed. Earlier this year, a deepfake advertisement resembling Taylor Swift circulated online, and in another case, a woman from Arizona, USA, fell prey to a scam built around a deepfake of Oprah Winfrey.
There have been multiple initiatives to crack down on deepfake content on the internet. Earlier this year, OpenAI shared a ‘deepfake detector’ with a small group of researchers working on preventing misinformation.
In September, IBM revealed that it is working with Reality Defender, a company whose technology detects manipulated voice, video, and images. “Bad actors have a low barrier to entry,” said Srinivas Tummalapenta, CTO of IBM Security Services.
“While technologies and integrators are available, many corporations haven’t allocated funds to address this problem,” he added.
Supreeth Koundinya
Supreeth is an engineering graduate who is curious about the world of artificial intelligence and loves to write stories on how it is solving problems and shaping the future of humanity.