Artificial intelligence is remaking how music is made, distributed, and owned — and the industry has yet to agree on whether that is cause for celebration or alarm.
Key Takeaways
- AI music generation tools like Suno and Udio can produce full tracks from text prompts in seconds, raising urgent questions about authorship.
- The Recording Industry Association of America filed suit against multiple AI companies in 2024, citing large-scale copyright infringement in training datasets.
- Session musicians, composers, and voice artists are among the most economically vulnerable workers in the current AI transition.
- Some independent artists have begun using AI tools deliberately as part of their creative process, complicating the idea of a simple human-versus-machine divide.
- No country has yet passed comprehensive legislation specifically governing AI-generated music, leaving rights holders in a prolonged state of legal uncertainty.
A Question of Authorship
When the musician Holly Herndon released her 2019 album PROTO, she trained a neural network on her own voice and those of her collaborators, creating what she called an AI "baby" named Spawn. The resulting record felt genuinely strange — choral, alien, and somehow intimate. It was also unmistakably hers. Herndon's experiment asked a question that the music industry has been scrambling to answer ever since: at what point does a machine's contribution become authorship, and what happens to the humans orbiting that process?
Five years later, the tools have grown considerably more powerful and considerably less artistically intentional. Platforms like Suno and Udio allow anyone to type a phrase — "melancholic lo-fi piano with a hint of jazz" — and receive a finished track within seconds. The gap between prompt and product is so small that it raises a foundational question: not just who made the music, but whether the concept of making still applies. The U.S. Copyright Office has so far declined to register purely AI-generated works, ruling that copyright requires human authorship. What constitutes sufficient human involvement, however, remains actively contested.
The Training Data Problem
Generative AI systems do not emerge from nothing. They are trained on vast repositories of existing material — recordings, scores, lyrics, stem files — and the question of whether that training constitutes a form of copying has become one of the central legal disputes of our era. In June 2024, the Recording Industry Association of America filed lawsuits against Suno and Udio, alleging that both companies ingested copyrighted recordings without licensing them, effectively laundering the labor of generations of musicians into a product from which those musicians would receive nothing.
The companies defended their position by invoking fair use doctrine, arguing that training a model is transformative rather than reproductive. Legal scholars remain divided. What is harder to dispute is the practical reality: a model trained on thirty years of R&B recordings can generate something that sounds, in texture and feeling, like the product of that tradition — without any of the individual artists who built that tradition receiving credit, compensation, or consent.
"The question isn't whether the output sounds like any one song. The question is whether a system can be built on the unconsented labor of thousands of artists and still be called original." — Kate Hyman, music attorney, speaking at the Future of Music Coalition Summit, 2024
Independent artists are particularly exposed. Major labels possess the legal resources to litigate and, in some cases, to negotiate licensing arrangements that exclude smaller players. A bedroom producer whose distinctive synthesizer palette is absorbed into a training dataset has virtually no mechanism for redress. The asymmetry is not incidental — it reflects the same structural imbalances that have characterized the streaming economy since its inception.
Whose Labor Disappears
The conversation about AI in music tends to center on recording artists, but the most immediate economic disruption is falling on a less visible class of workers: session musicians, orchestral contractors, jingle composers, voice-over singers, and the armies of producers who generate functional music for film, advertising, and video games. These are skilled professionals whose work has always been characterized by speed and craft — qualities that AI now approximates at a fraction of the cost.
Stock music libraries, which once employed human composers to build catalogs of licensable tracks, have begun integrating AI-generated content at scale. Platforms like Artlist and Epidemic Sound face pressure to reduce costs; AI offers a convenient path. The composers who built their livelihoods on those libraries have seen licensing income decline with little institutional acknowledgment that the shift is anything other than market efficiency at work.
Voice synthesis presents a parallel set of concerns. The likeness of a singer's voice — its grain, its vibrato, its emotional register — has historically belonged to that singer in some intuitive moral sense, even when the law has been slow to formalize that protection. Synthetic vocal models trained on specific artists have proliferated across social platforms, producing tracks that sound like deceased or living musicians. Tennessee became the first U.S. state to address this directly with the ELVIS Act in 2024, but most jurisdictions have yet to follow.
Artists Who Are Choosing It
The debate loses some of its clarity when one examines the artists who are actively choosing to incorporate AI tools into their practice — not as a shortcut, but as an extension of a longstanding interest in technology as a compositional partner. Arca, Bianca Oblivion, and various artists working within the ambient and experimental traditions have approached machine learning with the same curiosity they might bring to a new synthesizer or effects processor. For these musicians, the question is not whether AI belongs in music but how its particular qualities — its tendency toward the uncanny, its capacity to hallucinate genre — can be shaped into something meaningful.
There is a genuine and underappreciated difference between an artist who uses an AI system as one instrument among many and a platform that generates anonymous content at industrial scale for placement in advertising or background listening queues. Both involve the same underlying technology, but they represent radically different relationships to creative intention. Collapsing the two into a single debate obscures more than it illuminates. The musician who trains a model on field recordings she collected over three years is doing something categorically different from a company deploying generative AI to flood streaming platforms with royalty-free filler.
The Streaming Catalog Problem
In 2023, Spotify removed tens of thousands of AI-generated tracks from its platform following reports that a distributor had used automated tools to manufacture streams — a scheme that effectively siphoned royalty payments away from human artists. The incident illustrated a problem that sits at the intersection of AI generation and streaming economics: when the cost of producing music approaches zero, the incentive to flood platforms with synthetic content becomes financially rational, even if the music itself serves no listener.
The royalty pool model that governs streaming payouts — in which a fixed percentage of revenue is divided among all streams on the platform — means that every AI-generated play dilutes the payments earned by human artists. This is not a hypothetical concern. Researchers at the music analytics firm Luminate estimated that artificial streaming inflated catalog numbers by a measurable percentage across multiple major platforms in 2023. The problem predates generative AI, but AI makes it cheaper and harder to detect.
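The dilution mechanic is simple arithmetic. A minimal sketch of a pro-rata pool, with illustrative numbers rather than real platform figures, shows how synthetic streams shrink every human artist's payout even when their own play counts are unchanged:

```python
def pro_rata_payouts(revenue_pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Divide a fixed revenue pool among tracks in proportion to stream counts."""
    total = sum(streams.values())
    return {track: revenue_pool * count / total for track, count in streams.items()}

# A fixed pool split between two human artists' streams.
before = pro_rata_payouts(1_000_000, {"artist_a": 600_000, "artist_b": 400_000})

# The same pool after synthetic tracks add 250,000 streams to the denominator.
after = pro_rata_payouts(
    1_000_000,
    {"artist_a": 600_000, "artist_b": 400_000, "synthetic": 250_000},
)

print(before["artist_a"])  # 600000.0
print(after["artist_a"])   # 480000.0 — a 20% cut with no change in real listening
```

Because the pool is fixed, the synthetic tracks do not need to divert listeners from anyone; merely existing in the denominator is enough to reduce everyone else's share.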
Streaming platforms are responding with a mixture of detection algorithms and policy updates, but neither has proven sufficient. The fundamental tension is structural: platforms benefit from large catalogs and high stream counts, which creates a passive incentive to tolerate a certain degree of synthetic content even while nominally opposing it.
Toward a Working Framework
What the music industry currently lacks is a coherent framework — legal, ethical, and economic — for navigating AI's integration. Individual lawsuits establish precedents, but they do so slowly and expensively. Platform policies shift in response to public pressure rather than principled design. Legislative efforts remain fragmented by jurisdiction. Meanwhile, the technology continues to develop at a pace that makes any given regulatory proposal feel dated before it passes.
Some advocates have proposed a licensing model analogous to performance rights: a collective mechanism through which AI companies would pay into a fund distributed to artists whose work contributed to training datasets. The proposal faces obvious challenges of implementation — how does one determine whose music influenced a given model, and in what proportion — but it has the virtue of acknowledging that the relationship between AI output and human creative labor is not simply one of inspiration but of material derivation.
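The distribution step of such a scheme would be straightforward arithmetic; the genuinely hard problem is producing the attribution weights in the first place. A sketch that assumes those weights are somehow given (the function, the weights, and the fund size are all hypothetical illustrations, not any proposal from the lawsuits or advocates cited above):

```python
def distribute_fund(fund: float, attribution: dict[str, float]) -> dict[str, float]:
    """Split a licensing fund among artists in proportion to hypothetical
    attribution weights estimating each catalog's contribution to a model.

    How to compute these weights for a real generative model is an open
    research and policy question; this only shows the payout mechanics.
    """
    total = sum(attribution.values())
    return {artist: fund * weight / total for artist, weight in attribution.items()}

payouts = distribute_fund(
    5_000_000,
    {"artist_a": 0.40, "artist_b": 0.35, "artist_c": 0.25},
)
print(payouts["artist_a"])  # 2000000.0
```

As with performance-rights collectives, the mechanism is only as fair as its measurement: a pro-rata split encodes whatever biases exist in the attribution estimates feeding it.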
What seems clear is that the either/or framing of the debate — AI as pure tool or pure threat — serves no one well. The technology is neither a neutral instrument nor an autonomous agent. It is a system built by companies with particular economic interests, trained on cultural material produced by people with economic needs, and deployed in an industry already marked by profound inequality. Understanding AI's role in music requires holding all of those facts simultaneously, which is considerably harder than choosing a side.
What Remains Human
There is a version of this conversation that ends in resignation — a sense that the displacement of human musical labor is simply the latest chapter in a long history of technological substitution, from the player piano to the drum machine. But that history is more complicated than the narrative of inevitability suggests. The drum machine did not eliminate drummers; it changed what drumming meant and created new contexts in which human performance became more valuable precisely because of its contrast with the machine.
Whether AI follows a similar trajectory depends less on the technology than on the decisions made now, in courtrooms and legislatures and corporate boardrooms and the contracts that artists are asked to sign. Music has always been a negotiation between the human need to express and the material conditions that shape how expression is possible. The arrival of generative AI is an intensification of that negotiation, not its end.