The 71 Percent

On what musicians actually use AI for, the debate nobody is having, and why honesty is the better argument

Let me tell you what musicians use AI for. Seventy-one percent use it for stem separation — the process of isolating individual instruments or vocals from a mixed recording. Forty-four percent use it for practice and skill development, primarily as a source of intelligent backing tracks and ear-training tools. Forty percent use it for backing track generation. Thirty-two percent use it for mixing and mastering assistance.

Current figures show that only twenty-four percent use it to generate a complete song from a text prompt.(1) But that is the use that dominates every public conversation about AI and music. And it’s the least common thing musicians actually do with the technology.

I find this data from the Moises and Water & Music survey of over 1,500 musicians genuinely interesting, not because it resolves the AI debate but because it identifies what the AI debate is actually about — and what it is not about. The debate is not about whether musicians use AI. We know they do.

The debate is about a specific use of AI: mass generation of synthetic content by non-musicians, trained on human creative work without consent, flooding distribution infrastructure and diluting the royalty pool that sustains the people who made the training data possible. These are different things, and conflating them has muddied the conversation.

Stem separation is a technology that professional recording engineers have used for years. The AI iteration of it is faster, cheaper, and accessible to musicians who could not previously afford the hardware or software. A musician using stem separation to isolate a drum part for sampling, or to extract a vocal from a demo for reference, is doing something that sounds nothing like 'AI is replacing musicians.' It sounds like musicians using a better version of a tool that already existed.

The same is true for AI-assisted mixing and mastering. Mastering has always been a technical process that depends on trained ears and specialised equipment. An AI mastering tool that makes a reasonable first pass at EQ and compression does not replace the mastering engineer for professional releases. It gives the independent artist releasing music on a budget a starting point that is better than nothing. Again, this doesn’t sound much like 'AI is replacing musicians.'

Among musicians who earn money from their music, AI has increased income for 26% and decreased it for fewer than 4%. Seventy-eight percent of professional musicians are already using AI tools, versus 60% of hobbyists. Professionals spend twice as much on AI tools as hobbyists.(2) The picture that emerges is of a professional musician population that has quickly incorporated AI into its workflow the way it incorporated digital audio workstations, plug-ins, and recording software — as tools that change what is possible without replacing the judgment, knowledge, and creative intention of the person using them.

None of this resolves the problems that are material to the resilience of musicians and the music industry, however. The training data problem is very real. The music generation systems that produce text-to-song outputs were trained on recordings made by human artists without their consent and without compensation. That the output of those systems might be used as a stem separation tool or a practice backing track does not retroactively make the training ethical. The consent problem exists regardless of which feature the musician is using.

The royalty pool dilution problem is also real. Deezer reported in 2025 that 34% of all new track uploads were fully AI-generated. Seven million AI tracks are being created every day by systems like Suno. These tracks compete for space in the same recommendation algorithms, the same playlists, and the same royalty pool as music made by human beings who trained for years to make it. The fact that a different set of humans is using AI for stem separation does not make the mass production of 100% synthetic content less damaging to the working musicians whose income depends on the royalty pool remaining uncorrupted.

The impersonation problem is real. Emily Portman, a folk singer who made no new album, discovered that a ten-track AI album had been uploaded under her name on Spotify, iTunes, and YouTube. The late Blaze Foley had 'AI schlock' appear on his verified artist page thirty-seven years after his death. Josh Kaufman, who played on Taylor Swift's Folklore, found a track attributed to him that he described as 'a Casio keyboard demo with broken English lyrics.' These are not problems created by musicians using AI for mixing assistance.

The conflation of these different uses under the single label 'AI in music' has produced a debate in which both sides frequently talk past each other, because they are addressing different things. The musician who uses AI for stem separation is not the content farm uploading 10,000 synthetic tracks to game the royalty pool. Treating them as equivalent doesn’t help to clarify the actual problem. It alienates the working musician who has already made a pragmatic accommodation with the available tools, and it makes the argument against mass synthetic content look like technophobia rather than an argument about consent and compensation.

Tatiana Cirisano at MIDiA Research has made the case that the 'AI bad, human good' binary fails to account for artists who are using AI to create genuinely new forms of expression that wouldn't exist without the technology — Portrait XO, Holly Herndon, Arca — and that critics of AI in music make the same mistake critics of every new creative technology have always made: looking for the new thing in the places where the old thing was found.(3)

As AI evolves, it's going to be necessary to hold space for an iterative conversation that acknowledges all of this. The Pack's position — human creation, human curation — is not a statement about stem separation. It is a commitment about what kind of content the platform will support and whose interests the platform is designed to serve. Those are governance decisions, made by the members of the cooperative, about the conditions under which music is distributed to the community that subscribes to it.

A musician who uses AI for mixing assistance and then releases the resulting song on the Pack is releasing a song made by a human musician using available tools. The song is theirs. The tools are theirs to choose. What the Pack's governance structure addresses is different: whether the content on the platform is made by human beings for human listeners, or whether the platform's catalogue and recommendation infrastructure has been colonised by synthetic material generated at zero marginal cost for the purpose of extracting royalties.

These are not the same question, and to us keeping them distinct matters — for the quality of the argument, for the credibility of the platform, and for the working musicians who are already using AI as part of their practice and who do not need to be told that the tools they rely on are disqualifying.

The 71% are making music. The argument is about what happens to that music, not about the software they used to make it.


Notes

  1. Moises / Water & Music — Musician AI Usage Report, 2025. https://moises.ai/insights/musician-ai-report-water-and-music/

  2. Moises / Water & Music — ibid.

  3. Tatiana Cirisano / MIDiA Research — 'The Problem with the AI Versus Human Music Narrative', 2025. https://www.midiaresearch.com/blog/the-problem-with-the-ai-versus-human-music-narrative
