Udio and Suno are not, despite their names, the hottest new restaurants on the Lower East Side. They’re AI startups that let people generate impressively real-sounding songs — complete with instrumentation and vocal performances — from prompts. And on Monday, a group of major record labels sued them, alleging copyright infringement “on an almost unimaginable scale,” claiming that the companies can only do this because they illegally ingested huge amounts of copyrighted music to train their AI models.
These two lawsuits contribute to a mounting pile of legal headaches for the AI industry. Some of the most successful firms in the space have trained their models with data acquired via the unsanctioned scraping of massive amounts of information from the internet. ChatGPT, for example, was initially trained on millions of documents collected from links posted to Reddit.
These lawsuits, which are spearheaded by the Recording Industry Association of America (RIAA), tackle music rather than the written word. But like The New York Times’ lawsuit against OpenAI, they pose a question that could reshape the tech landscape as we know it: can AI firms simply take whatever they want, turn it into a product worth billions, and claim it was fair use?
“That’s the key issue that’s got to get sorted out, because it cuts across all sorts of different industries,” said Paul Fakler, a partner at the law firm Mayer Brown who specializes in intellectual property cases.
What are Udio and Suno?
Both Udio and Suno are fairly new, but they’ve already made a big splash. Suno was launched in December by a Cambridge-based team that previously worked for Kensho, another AI company. It quickly entered into a partnership with Microsoft that integrated Suno with Copilot, Microsoft’s AI chatbot.
Udio was launched just this year, raising millions of dollars from heavy hitters in the tech investing world (Andreessen Horowitz) and the music world (Will.i.am and Common, for example). Udio’s platform was used by comedian King Willonius to generate “BBL Drizzy,” the Drake diss track that went viral after producer Metro Boomin remixed it and released it to the public for anyone to rap over.
Why is the music industry suing Udio and Suno?
The RIAA’s lawsuits use lofty language, saying that this litigation is about “ensuring that copyright continues to incentivize human invention and imagination, as it has for centuries.” This sounds nice, but ultimately, the incentive it’s talking about is money.
The RIAA claims that generative AI poses a risk to record labels’ business model. “Rather than license copyrighted sound recordings, potential licensees interested in licensing such recordings for their own purposes could generate an AI-soundalike at virtually no cost,” the lawsuits state, adding that such services could “[flood] the market with ‘copycats’ and ‘soundalikes,’ thereby upending an established sample licensing business.”
The RIAA is also asking for damages of up to $150,000 per infringing work, which, given the massive corpora of data typically used to train AI systems, adds up to a potentially astronomical number.
Does it matter that AI-generated songs are similar to real ones?
The RIAA’s lawsuits included examples of music generated with Suno and Udio and comparisons of their musical notation to existing copyrighted works. In some cases, the generated songs had small phrases that were similar — for instance, one started with the sung line “Jason Derulo” in the exact cadence that the real-life Jason Derulo begins many of his songs. Others had extended sequences of similar notation, as in the case of a track inspired by Green Day’s “American Idiot.”
This seems pretty damning, but the RIAA isn’t claiming that these specific soundalike tracks infringe copyright — rather, it’s claiming that the AI companies used copyrighted music as a part of their training data.
Neither Suno nor Udio has made its training dataset public. And both firms are vague about the sources of their training data — though that’s par for the course in the AI industry. (OpenAI, for example, has dodged questions about whether YouTube videos were used to train its Sora video model.)
The RIAA’s lawsuits note that Udio CEO David Ding has said the company trains on the “best quality” music that is “publicly available” and that a Suno co-founder wrote in Suno’s official Discord that the company trains with a “mix of proprietary and public data.”
Fakler said that including the examples and notation comparisons in the lawsuit is “wacky,” saying it went “way beyond” what would be necessary to claim legitimate grounds for a lawsuit. For one, the labels may not own the composition rights of the songs allegedly ingested by Udio and Suno for training. Rather, they own the copyright to the sound recording, so showing similarity in musical notation doesn’t necessarily help in a copyright dispute. “I think it’s really designed for optics for PR purposes,” Fakler said.
On top of that, Fakler noted, it’s legal to create a soundalike audio recording if you have the rights to the underlying song.
When reached for comment, a Suno spokesperson shared a statement from CEO Mikey Shulman stating that its technology is “transformative” and that the company does not allow prompts that name existing artists. Udio did not respond to a request for comment.
Is it fair use?
But even if Udio and Suno used the record labels’ copyrighted works to train their models, there’s a very big question that could override everything else: is this fair use?
Fair use is a legal defense that allows for the use of copyrighted material in the creation of a meaningfully new or transformative work. The RIAA argues that the startups cannot claim fair use, saying that the outputs of Udio and Suno are meant to replace real recordings, that they are generated for a commercial purpose, that the copying was extensive rather than selective, and finally, that the resulting product poses a direct threat to labels’ business.
In Fakler’s opinion, the startups have a solid fair use argument so long as the copyrighted works were only temporarily copied and their defining features were extracted and abstracted into the weights of an AI model.
“That’s how computers work — it has to make these copies, and the computer is then analyzing all of this data so they can extract the non-copyrighted stuff,” he said. “How do we construct songs that are going to be understood as music by a listener, and have various features that we commonly find in popular music? It’s extracting all of that stuff out, just like a musician would learn those things by playing music.”
“To my mind, that is a very strong fair use argument,” said Fakler.
Of course, a judge or a jury may not agree. And what is dredged up in the discovery process — if these lawsuits get that far — could have a big effect on the cases. Which music tracks were taken and how they ended up in the training sets could matter, and specifics about the training process might undercut a fair use defense.
We are all in for a very long journey as the RIAA’s lawsuits, and similar ones, proceed through the courts. From text and photos to now sound recordings, the question of fair use looms over all these cases and the AI industry as a whole.