Even if you didn’t watch last weekend’s episode of Saturday Night Live, you still probably saw it. You may already even know what “it” I’m talking about: Timothée Chalamet, and other similarly dressed cast members, booty-shaking in tiny little red undies. He was, the sketch goes, “an Australian YouTube twink turned indie pop star and model turned HBO actor Troye Sivan being played by an American actor who can’t do an Australian accent.” Chalamet and his cohort were Troye Sivan Sleep Demons, and they’d been haunting straight women all over the place. It was a funny bit and, ironically, the least nightmarish Sivan impression to come out this week.
On Thursday, Google DeepMind announced Lyria, which it calls its “most advanced AI music generation model to date” and a pair of “experiments” for music making. One is a set of AI tools that allow people to, say, hum a melody and have it turn into a guitar riff, or transform a keyboard solo into a choir. The other is called Dream Track, and it allows users to make 30-second YouTube Shorts using the AI-generated voices and musical styles of artists like T-Pain, Sia, Demi Lovato, and—yes—Sivan almost instantly. All anyone has to do is type in a topic and pick an artist off a carousel, and the tool writes the lyrics, produces the backing track, and sings the song in the style of the musician selected. It’s wild.
My freak-out about this isn’t a fear of a million fake Troye Sivans haunting my dreams; it’s that the most creative work shouldn’t be this easy. It should be difficult. To borrow from A League of Their Own’s Jimmy Dugan, “It’s supposed to be hard. If it wasn’t, everyone would do it. The hard is what makes it great.” Yes, asking a machine to make a song about fishing in the style of Charli XCX is fun (or at least funny), but Charli XCX songs are good because they’re full of her attitude, something that comes through even when she writes for other people, like she did on Icona Pop’s “I Love It.” To borrow again, from a sign hoisted during the Hollywood writers strike, “ChatGPT doesn’t have childhood trauma.”
Not that these tools have no use. They are, more than anything, meant to help cultivate ideas and, in Dream Track’s case, “test new ways for artists to connect with their fans.” It’s about making new experimental noises for YouTube, rather than Billboard chart-toppers. As Lovato, who, along with the other participating artists, allowed DeepMind to use her music for this project, said in a statement, AI is upending how artists work and “we need to be a part of shaping what that future looks like.”
Google’s latest AI music toy comes at a tricky time. Generative AI creates something of a digital minefield when it comes to copyright, and YouTube, which Google owns, has been trying to handle both an influx of AI-made music and the fact that it has agreements with labels to pay when artists’ work shows up on the platform. A few months ago, when “Heart on My Sleeve”—an AI-generated song by “Drake” and “The Weeknd”—went viral, it was ultimately pulled from several streaming services following complaints from the artists’ label, Universal Music Group.
But even if, say, the manager of Johnny Cash’s estate isn’t seeking to stop AI-generated covers of “Barbie Girl,” the technology still presents a conundrum for artists: They can either work with companies like Google to create AI tools using their music, make their own tools (like Holly Herndon and Grimes have), push back and see whether copyright law applies to music made from AI models trained on their work, or do nothing. It’s a question seemingly every artist is thinking about right now, or at least getting asked about.
Earlier this week, this question about how copyright applies to AI music got an interesting answer. On Wednesday, Ed Newton-Rex, the head of Stability AI’s audio team, posted on X that he was resigning from his position because he doesn’t agree “with the company’s opinion that training generative AI models on copyrighted works is ‘fair use.’” Stability AI did not respond to a request for comment about the post. Whether training AI models on copyrighted works constitutes fair use under US law, or any other law for that matter, is a hotly debated topic, one that’s testing the durability of copyright to handle artificial intelligence—and may determine the future of AI as a creative tool.
In his resignation message, Newton-Rex noted that one of the four factors used to determine whether something is fair use is the impact any work may have on its potential market value. Does the existence of Fake Drake impact the potential sales of Real Drake? Is Not Johnny Cash’s Taylor Swift cover something Cash and/or Swift should be compensated for? TBD, but, as Newton-Rex wrote: “Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on.”
There is also the matter of fans. Being able to use DeepMind’s Dream Track to make a goofy, TikTok-style T-Pain spoof has its appeal, a big part of which is that T-Pain is cool with it. Still, the aficionados who want a Sia track, by Sia, won’t settle for anything less than the next “Chandelier,” and AI can’t deliver that.
This all reminds me of Nipper. First captured in a painting in 1899, Nipper is the dog in the old RCA logo, the one listening to a windup cylinder phonograph. The painting is called His Master’s Voice, the point being that Nipper could hear something off of a record and think it was real. For more than a century, technology has improved audio quality, made it clearer and closer to an actual reproduction of what live music sounds like. Some may argue that digital will never sound as good as vinyl (guilty), but both have the goal of sounding authentic, like creativity captured in the moment. AI can do that, to a point, but may never be the same as hearing the voice of your favorite artist light up a track.