YouTube is launching an experimental new tool that lets users create short, original songs using AI-generated voice clones of popular musicians.
Dubbed “Dream Track,” the AI-based technology is initially available to a small group of creators, YouTube announced in a blog post last week. So far, nine artists have lent their voices to the project: Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Papoose, Sia, T-Pain and Troye Sivan.
Dream Track will generate unique songs up to 30 seconds long for YouTube Shorts, which are short, vertical videos similar to those on TikTok and Instagram Reels. Users will enter a brief description of the sound they’re aiming for — such as “a ballad about how opposites attract, upbeat acoustics,” according to YouTube’s example — and select one of the nine participating artists. Dream Track will then produce a snippet of a new song that matches these parameters, sung with the AI-generated voice of the selected artist.
“In this initial phase, the experiment is designed to help explore how technology can be used to create deeper connections between artists and creators and ultimately their fans,” according to the blog post.
The tool uses Lyria, Google DeepMind’s AI music generation model. Both YouTube and Google DeepMind are subsidiaries of the same parent company, Alphabet. The songs will contain an embedded watermark that identifies them as AI-generated; the watermark is undetectable to the human ear, according to a blog post by Google DeepMind.
In statements released by YouTube, artists expressed support for the experiment — although some, like Charli XCX, noted they remain cautious about AI’s potential role in the music industry.
“AI will transform the world and the music industry in ways we don’t yet fully understand,” she said in a statement. “This experiment will offer a small glimpse of the creative possibilities that might be possible, and I’m interested to see what comes of it.”
AI-generated music has sparked some controversy in recent months. Earlier this year, a creator known as Ghostwriter released a song called “Heart on My Sleeve” that featured AI-generated vocals mimicking artists Drake and the Weeknd. After the song went viral, the artists’ record label, Universal Music Group, successfully lobbied YouTube and other music streaming sites to remove it, citing copyright claims.
Earlier this month, YouTube also unveiled new guidelines for AI-generated content on the platform. The rules will require creators to label their realistic AI-generated videos as such; YouTube will also create a process for people to request that deepfake videos be removed.
In the murky, still-evolving world of AI-generated music, artists, lawyers and music platforms are still trying to untangle the various ethical and legal implications of the technology. Where do copyright laws come into play, if at all? Who gets paid for AI-generated music? And what happens if the AI generates controversial lyrics that damage the reputation of the real artist?
“When we talk about creating vocals, it can be used to say something that’s the polar opposite of that person’s belief system,” BT, an American musician and DJ, told NPR’s Chloe Veltman in April.