The Label on the Label
On AI disclosure, what transparency can and cannot fix, and why the question is structural
Earlier this year, a retro-pop band called The Velvet Sundown accumulated a million monthly listeners on Spotify. The music was catchy. The backstory was detailed. The band photographs were suitably groovy. Everything about the project was coherent, appealing, and entirely generated by artificial intelligence — the songs, the images, the biography, the artist identity that listeners had been following and sharing and adding to playlists as though it were a real band finding its audience.
It was not a real band. It was a demonstration, or possibly an experiment, or possibly just a content strategy — the framing depends on whether you believe the people who built it had a point to make or a royalty pool to access. Either way, it worked, which is the significant thing. A million monthly listeners did not know, could not tell, and in most cases were not thinking to ask.
The music industry's response has been to propose labels. This is understandable. It is also insufficient, for reasons worth thinking through carefully: the AI disclosure conversation is currently consuming a great deal of the sector's attention, and it is in danger of arriving at a solution that addresses the symptom while leaving the underlying condition untreated.
In September 2025, Spotify announced support for a new industry standard for AI disclosure in music credits, developed through DDEX — the standards body that manages the data exchange frameworks underpinning most of the global music distribution infrastructure.(1) The standard allows for granular disclosure: which elements of a track were AI-generated, whether vocals or instrumentation or post-production processing, rather than a binary flag that would classify a track with one AI-assisted reverb adjustment the same as a track generated entirely by a language model trained on other people's music without their consent.
This is a genuine improvement over nothing, and it is worth acknowledging as such. The DDEX standard, if adopted broadly, would give listeners information they currently do not have and that they have a reasonable claim to want. It would allow music supervisors, arts funding bodies, and cultural institutions that care about supporting human creativity to identify the music they are looking for. It would make visible a distinction that the current infrastructure renders invisible.
What it would not do is change the economics of AI music production relative to human music production. It would not prevent content farms from flooding platforms with algorithmically generated tracks that undercut independent artists in the functional music categories — sleep, focus, background, workout — that generate reliable streaming volume with listeners who are not paying close attention to what they are playing. It would not address the training data question: the fact that most commercial AI music generation systems were trained on recorded music created by human artists without their consent and without compensation, and that no disclosure standard retroactively resolves that appropriation.
The label tells you what is in the tin. It does not change what is in the tin, or address how it got there.
The Velvet Sundown case is instructive precisely because disclosure would not have prevented it. If the project had carried an AI disclosure label, a portion of the million monthly listeners might have made different choices. A portion would not have cared, or would have found the transparency interesting rather than disqualifying, or would have continued listening because the music served the function they needed it to serve regardless of its provenance.
This is not a cynical observation. It is a description of how music actually functions for most listeners most of the time. The majority of streaming is not attentive listening in which the listener is actively engaged with the humanity of the creative process. It is background music, mood regulation, accompaniment to other activities. For these functions, a well-made AI track is a functional substitute for a well-made human track. Disclosure labels do not change the functional equivalence. They provide information to listeners who are motivated to use it, which is a subset of the total listener population.
The CISAC study published in 2025 estimated that generative AI could put 24% of music creators' revenues at risk by 2028.(2) This figure reflects not primarily the risk that listeners will choose AI music over human music once informed of the difference — though that risk exists — but the risk that platforms will use AI-generated content to reduce their royalty obligations, that content farms will use volume production to occupy algorithmic recommendation slots, and that the general devaluation of recorded music as a category will accelerate as the marginal cost of producing new tracks approaches zero. None of these risks is addressed by disclosure.
The DDEX standard draws a partial parallel with Canada's MAPL system, which has governed Canadian content requirements in radio broadcasting since 1971.(3) MAPL — which assesses whether a track qualifies as Canadian content based on its Music, Artist, Performance, and Lyrics — created both a measurement framework and a mandatory support structure: radio stations must meet minimum Canadian content thresholds, which creates market demand for Canadian music that would not exist at the same level under purely commercial programming logic.
The parallel is instructive but also reveals the limitation of the disclosure approach on its own. MAPL works not because it labels Canadian music but because it mandates its presence. The label is the measurement mechanism for an obligation. Without the obligation, the label is just information — useful to some listeners, ignored by others, and insufficient on its own to sustain the domestic music ecosystem it was designed to protect.
An AI disclosure standard without accompanying obligations — without minimum thresholds for human-created content on platforms, without consent and compensation frameworks for training data, without royalty structures that distinguish between human and AI-generated music in their distribution logic — is the label without the mandate. It tells listeners what they are hearing. It does not change what the platform has an incentive to serve them.
The fundamental question is what it would take to make the platform's incentives align with the interests of the human artists whose work built the streaming ecosystem in the first place. This is the question that disclosure standards do not ask, and it is an ethical question that the current moment in the music industry makes unavoidable.
The training data issue is the most urgent unresolved element. AI music generation systems trained on recorded music without artist consent are a form of appropriation whose legal status is currently contested in multiple jurisdictions and whose ethical status should not be particularly contested at all. The music was made by specific people who made specific creative decisions and whose labour and artistry are now being used to train systems that compete with them, without their knowledge or compensation. The disclosure standard does not address this. Consent and compensation frameworks for AI training data would, and are being developed in some jurisdictions faster than others.
The royalty distribution question connects to this. If AI-generated tracks are permitted on major platforms under the same royalty framework as human-created music, the effect is to dilute the royalty pool available to human artists with content whose production cost approaches zero.(4) Some platforms have begun experimenting with differentiated treatment of AI content — lower placement priority, separate cataloguing, restricted eligibility for certain playlist and editorial features. These are partial measures, but they reflect the recognition that treating all audio content equivalently, regardless of its provenance, has consequences for the economics of human music creation that disclosure alone does not resolve.
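The dilution mechanism at work here can be made concrete with a short sketch. Under the pro-rata royalty model that most major platforms use, each rights holder is paid a share of a fixed revenue pool proportional to their share of total streams, so adding streams without adding revenue lowers the rate for everyone. The figures below are illustrative assumptions, not platform data:

```python
# Illustrative sketch of pro-rata royalty pool dilution.
# All figures are hypothetical, chosen only to show the arithmetic.
# Under the pro-rata model:
#   payout = pool_revenue * (rights_holder_streams / total_streams)

def per_stream_rate(pool_revenue: float, total_streams: int) -> float:
    """Average royalty paid per stream under a fixed revenue pool."""
    return pool_revenue / total_streams

pool = 1_000_000.00          # fixed subscription revenue allocated to royalties
human_streams = 250_000_000  # streams of human-created tracks

# Before AI-generated content enters the pool:
rate_before = per_stream_rate(pool, human_streams)

# AI-generated tracks add streams but no new subscription revenue:
ai_streams = 50_000_000
rate_after = per_stream_rate(pool, human_streams + ai_streams)

print(f"per-stream rate before: {rate_before:.6f}")   # 0.004000
print(f"per-stream rate after:  {rate_after:.6f}")    # 0.003333
print(f"human artists' share of the pool falls to "
      f"{human_streams / (human_streams + ai_streams):.0%}")
```

The point of the sketch is the structure, not the numbers: because the pool is fixed, every zero-cost AI stream is a transfer away from human rights holders, regardless of whether any listener is deceived about what they are hearing.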
The ownership question is different again. The argument for platform cooperative models — artist-owned and governed distribution infrastructure, with revenue distribution and content policy answerable to the artists who are members — is that it relocates the incentive structure entirely. A platform whose owners are the human musicians whose work it carries has a different relationship to the question of whether to admit AI-generated content than a platform whose owners are shareholders seeking return on investment. The policy is not a product decision to be optimised. It is a values question to be decided by the people most affected by it.
The Velvet Sundown accumulated a million listeners and then, when the deception became public, became a case study in the conversation about what disclosure standards should require. The more interesting version of the story is the one that was not a deliberate deception: the thousands of AI-generated tracks on major platforms right now, produced by content farms and individual users with access to generation tools, carrying no particular claim to human artistry and accumulating streams that reduce the royalty pool available to the human artists competing in the same categories.
These tracks do not need disclosure labels to exist. They exist because the platform economics make their existence rational, because the tools to produce them are accessible, and because the infrastructure for distinguishing them from human-created music at scale has not been built. Disclosure standards, when implemented, will make some of them visible. They will not make them less economically rational to produce.
The label on the tin is useful. It is not the same as changing what goes in the tin, or who gets to decide, or whose interests the tin is designed to serve.
Those are foundational, ethical questions. The industry is currently answering them with a sticker.
* * *
References and inspirations
1. The DDEX AI disclosure standard was announced in September 2025, with Spotify confirming support alongside other major platforms. DDEX (Digital Data Exchange) is the standards body that develops and maintains the data communication standards used across the music industry for rights management, royalty processing, and metadata. The granular disclosure framework — covering specific uses of AI in different production elements — was developed in consultation with labels, distributors, and technology companies.
2. CISAC (International Confederation of Societies of Authors and Composers), The Impact of Generative AI on the Music Industry, 2025. The 24% revenue risk figure is a projection based on modelled scenarios of AI adoption in music production and consumption, not a current measurement. The methodology and assumptions are available in the full report.
3. Canada's MAPL system has governed Canadian content requirements in radio broadcasting since 1971, administered by the CRTC (Canadian Radio-television and Telecommunications Commission). The four criteria — Music, Artist, Performance, Lyrics — require that at least two of the four be Canadian for a track to qualify as CanCon. The system has been credited with sustaining a domestic Canadian music industry in the face of US market dominance.
4. On royalty pool dilution through AI-generated content, see the analysis published by the Trichordist and Music Business Worldwide tracking the growth of non-human content on major platforms and its effect on per-stream royalty rates. The mechanism is straightforward: if total platform streams increase due to AI content without a proportional increase in subscription revenue, the per-stream rate falls for all rights holders including human artists.