The Label on the Label
On AI disclosure, what transparency can and cannot fix, and why the question is structural
Earlier this year, a retro-pop band called The Velvet Sundown accumulated a million monthly listeners on Spotify. The music was catchy. The backstory was detailed. The band photographs had that particular quality of rewarding close inspection with new questions each time you looked. Everything about the project was coherent, appealing, and entirely generated by artificial intelligence — the songs, the images, the biography, the artist identity that listeners had been following and sharing and adding to playlists as though it were a real band finding its audience.
It was not a real band. It was a demonstration, or possibly an experiment, or possibly just a content strategy — the framing depends on whether you believe the people who built it had a point to make or a royalty pool to access. Either way, it worked, which is the significant thing. A million monthly listeners did not know, could not tell, and in most cases were not thinking to ask.
The music industry's response has been to propose labels. This is understandable. It is also insufficient, for reasons worth thinking through carefully: the AI disclosure conversation is consuming a great deal of the sector's attention, and it is in danger of arriving at a solution that treats the symptom while leaving the underlying condition untreated.
In September 2025, Spotify announced support for a new industry standard for AI disclosure in music credits, developed through DDEX — the standards body that manages the data exchange frameworks underpinning most of the global music distribution infrastructure.(1) The standard allows for granular disclosure: which elements of a track were AI-generated, whether vocals, instrumentation, or post-production processing, rather than a binary flag that would classify a track with one AI-assisted reverb adjustment the same as a track generated entirely by a model trained on other people's music without their consent.
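To make the distinction between granular and binary disclosure concrete, here is a minimal sketch of what element-level disclosure metadata might look like. The field names and values are illustrative only — they are not the actual DDEX schema, which defines its own vocabulary:

```python
def ai_generated_elements(track):
    """Return the elements of a track flagged as AI-generated, sorted by name.

    Assumes an illustrative (non-DDEX) metadata shape where each element
    maps to an origin string.
    """
    return sorted(
        element
        for element, origin in track["elements"].items()
        if origin == "ai_generated"
    )


# A track where only post-production used AI: under a binary flag it would
# be classified identically to a fully generated track.
mostly_human = {
    "title": "Example Track A",
    "elements": {
        "vocals": "human",
        "instrumentation": "human",
        "post_production": "ai_generated",
    },
}

fully_generated = {
    "title": "Example Track B",
    "elements": {
        "vocals": "ai_generated",
        "instrumentation": "ai_generated",
        "post_production": "ai_generated",
    },
}

print(ai_generated_elements(mostly_human))     # ['post_production']
print(ai_generated_elements(fully_generated))  # all three elements
```

The point of the sketch is the difference in information content: a binary flag collapses both tracks into one category, while element-level metadata lets listeners, supervisors, and funding bodies draw the line where they choose.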
This is a genuine improvement over nothing, and it is worth acknowledging as such. The DDEX standard, if adopted broadly, would give listeners information they currently do not have and that they have a reasonable claim to want. It would allow music supervisors, arts funding bodies, and cultural institutions that care about supporting human creativity to identify the music they are looking for.
What it would not do is change the economics of AI music production relative to human music production. It would not prevent content farms from flooding platforms with algorithmically generated tracks that undercut independent artists in the functional music categories — sleep, focus, background, workout — that generate reliable streaming volume with listeners who are not paying close attention to what they are playing. It would not address the training data question: the fact that most commercial AI music generation systems were trained on recorded music created by human artists without their consent and without compensation, and that no disclosure standard retroactively resolves that appropriation.
The label tells you what is in the tin. It does not change what is in the tin, or address how it got there.
The Velvet Sundown case is instructive precisely because disclosure would not have prevented it. If the project had carried an AI disclosure label, a portion of the million monthly listeners might have made different choices. A portion would not have cared, or would have found the transparency interesting rather than disqualifying, or would have continued listening because the music served the function they needed it to serve regardless of its provenance.
This is not a cynical observation. It is a description of how music actually functions for most listeners most of the time. The majority of streaming is not attentive listening in which the listener is actively engaged with the humanity of the creative process. It is background music, mood regulation, accompaniment to other activities. For these functions, a well-made AI track is a functional substitute for a well-made human track. Disclosure labels do not change the functional equivalence. They provide information to listeners who are motivated to use it, which is a subset of the total listener population.
The CISAC study published in 2025 estimated that generative AI could put 24% of music creators' revenues at risk by 2028.(2) This figure reflects not primarily the risk that listeners will choose AI music over human music once informed of the difference — though that risk exists — but the risk that platforms will use AI-generated content to reduce their royalty obligations, that content farms will use volume production to occupy algorithmic recommendation slots, and that the general devaluation of recorded music as a category will accelerate as the marginal cost of producing new tracks approaches zero. None of these risks is addressed by disclosure.
The DDEX standard invites a partial parallel with Canada's MAPL system, which has governed Canadian content requirements in radio broadcasting since 1971.(3) MAPL — which assesses whether a track qualifies as Canadian content based on its Music, Artist, Performance, and Lyrics — created both a measurement and a mandatory support structure: radio stations must meet minimum Canadian content thresholds, which creates market demand for Canadian music that would not exist at the same level under purely commercial programming logic.
The parallel is instructive but also reveals the limitation of the disclosure approach on its own. MAPL works not because it labels Canadian music but because it mandates its presence. The label is the measurement mechanism for an obligation. Without the obligation, the label is just information — useful to some listeners, ignored by others, and insufficient on its own to sustain the domestic music sector it was designed to protect.
An AI disclosure standard without accompanying obligations — without minimum thresholds for human-created content on platforms, without consent and compensation arrangements for training data, without royalty structures that distinguish between human and AI-generated music in their distribution logic — is the label without the mandate. It tells listeners what they are hearing. It does not change what the platform has an incentive to serve them.
The structural question is what it would take to make the platform's incentives align with the interests of the human artists whose work built the streaming sector in the first place. Disclosure standards do not ask this question, and it is the question that the current moment makes unavoidable.
The training data issue is the most urgent unresolved element. AI music generation systems trained on recorded music without artist consent are a form of appropriation whose legal status is currently contested in multiple jurisdictions and whose ethical status should not be particularly contested at all. The music was made by specific people who made specific creative decisions, and their labour and artistry are now being used to train systems that compete with them, without their knowledge or compensation. Consent and compensation arrangements for AI training data would address this, and are being developed in some jurisdictions faster than others.
If AI-generated tracks are permitted on major platforms under the same royalty arrangement as human-created music, the effect is to dilute the royalty pool available to human artists with content whose production cost approaches zero.(4) Some platforms have begun experimenting with differentiated treatment of AI content — lower placement priority, separate cataloguing, restricted eligibility for certain playlist and editorial features. These are partial measures, but they reflect the recognition that treating all audio content equivalently, regardless of its provenance, has consequences for the economics of human music creation that disclosure alone does not resolve.
The argument for platform cooperative models — artist-owned and governed distribution infrastructure, with revenue distribution and content policy answerable to the artists who are members — is that it relocates the incentive structure. A platform whose owners are the human musicians whose work it carries has a different relationship to the question of whether to admit AI-generated content than a platform whose owners are shareholders seeking return on investment. The policy is not a product decision to be optimised. It is a values question to be decided by the people most affected by it.
The Velvet Sundown accumulated a million listeners and then, when the deception became public, became a case study in the conversation about what disclosure standards should require. The more interesting version of the story is the one that was not a deliberate deception: the thousands of AI-generated tracks on major platforms right now, produced by content farms and individual users with access to generation tools, carrying no particular claim to human artistry and accumulating streams that reduce the royalty pool available to the human artists competing in the same categories.
These tracks do not need disclosure labels to exist. They exist because the platform economics make their existence rational, because the tools to produce them are accessible, and because the infrastructure for distinguishing them from human-created music at scale has not been built. Disclosure standards, when implemented, will make some of them visible. They will not make them less economically rational to produce.
The label on the tin is useful. It is not the same as changing what goes in the tin, or who gets to decide, or whose interests the tin is designed to serve. Those are structural questions, and the industry is currently answering them with a sticker.
Notes
1. DDEX AI disclosure standard, September 2025 — Music Business Worldwide. https://www.musicbusinessworldwide.com
2. CISAC, The Impact of Generative AI on the Music Industry, 2025. https://www.cisac.org
3. CRTC — Canada's MAPL content system. https://crtc.gc.ca/eng/television/cancon/r1.htm
4. Trichordist and Music Business Worldwide — royalty pool dilution analysis. https://thetrichordist.com
If you've made it this far, you probably care about where music is headed.
So do we — that's why we built something different. The Pack Music Co-operative is Australia's first musician-owned streaming platform: cooperative-governed, human-curated, and built on the radical premise that the people who make the music should own the infrastructure that distributes it.
Join the Pack — become an early adopter member, support our crowdfunding campaign, or lend your voice as an Ambassador: 👉 packmusic.au/join-the-pack
Back the campaign — every dollar goes directly toward getting us to launch: 👉 crowdfunding.startsomegood.com/thepackmusiccoop
Read our story — where we came from, why we built it, and what we believe: 👉 packmusic.au/who-we-are
Say hello — we genuinely want to hear from you: 👉 packmusic.au/contact