SpotifAI - AI-generated music and the unsettled legal landscape

AI-generated music is now commercially mainstream - but the legal framework is still catching up. Instead of clear rules, the industry is currently being shaped by a mix of litigation, private licensing deals, and early-stage regulation.

For rights holders, platforms, and developers, the core tension is clear: AI music is already being widely used and monetised, but the legal foundations remain uncertain. In this article, we explore the key legal developments and what they mean for businesses operating in or alongside the music industry.

Litigation: key questions remain unresolved

Most disputes about AI music come down to three fundamental questions:

  • does training AI models on recordings or compositions amount to “copying” under copyright law?
  • who (if anyone) is the “author” of AI-generated music?
  • do outputs that resemble existing works give rise to infringement liability?

In the US, major record labels such as Universal, Sony, and Warner have sued AI developers (including music-specific platforms such as Suno and Udio, and, more recently, Anthropic over its Claude model) alleging unauthorised use of copyrighted music in training and infringing copies. Since then, some cases have been settled (notably those involving Udio and Warner), while others - particularly those involving Suno, Sony and Anthropic - remain ongoing. No final US court decision has yet resolved the core legal questions around AI music and copyright. The fact that these cases have been brought against the AI model developers, rather than the end users generating the outputs, may suggest that liability for infringing outputs is more likely to rest with developers than with users - but that allocation remains to be determined.

The UK’s litigation activity in this space lags behind that of the US. There remains no music-specific judgment in the UK, but the High Court’s decision in Getty Images v Stability AI offers important context. The Court found that the Stable Diffusion AI model itself was not an “infringing copy” because it did not store copies of the relevant copyright works. Crucially, however, the judgment did not decide the broader question of whether using copyrighted works for training amounts to copyright infringement under UK law. 

A further complication is jurisdiction. Where training occurs outside the UK, rights holders may face significant evidential and territorial hurdles in bringing successful claims. This highlights a practical tension between the inherently global nature of AI development and the territorial structure of copyright law. Businesses on both sides of the debate should be mindful of this gap.

In the UK context, legislation does provide some guidance on these questions. The Copyright, Designs and Patents Act 1988 (CDPA) explicitly recognises “computer-generated” works as capable of copyright protection, with section 9(3) defining the author of such a work as “the person by whom the arrangements necessary for the creation of the work are undertaken”. However, in the absence of case law addressing how this provision applies to modern generative AI systems, it remains uncertain whether the user who enters a prompt, or the developer of the AI model, would qualify as the author under this definition.

A shift towards licensing

Rather than relying solely on litigation, major rights holders and technology companies are increasingly exploring licensing arrangements that permit AI training on music catalogues under agreed conditions. These deals often include elements such as licensed datasets, mechanisms for artist consent or control (sometimes on an opt-in basis), and forms of compensation or revenue-sharing, although the precise structure of these features continues to evolve and is not yet standardised across the industry. 

This marks an important shift in approach: from treating AI training as inherently infringing, to treating it as a licensable use, much like sampling or streaming, where usage is permitted but paid for. For businesses developing music AI tools, securing appropriate licences is fast becoming a way to mitigate legal risk and address reputational concerns.

In the UK, this direction has been reinforced by the Government’s recent decision not to proceed with a broad expansion of the text and data mining exception, which would have permitted wider AI training subject to opt-out by rights holders. Instead, recent policy discussions have emphasised licensing and commercial negotiation as the means of facilitating access to protected works. Unlike the broader fair use doctrine available under US copyright law, the UK’s fair dealing framework is much narrower, making licensing a particularly important route to the lawful use of copyrighted works.

However, the model remains complex. Key unresolved issues in such licensing deals include how to value music used in training, whether licences extend to vocal style or “sound-alike” replication, and how AI-generated outputs can be exploited beyond controlled environments. These are not merely technical questions - they go to the heart of how revenue is shared and risk is allocated across the AI music value chain. 

Regulation remains fragmented

Governments are responding, but there is no single, comprehensive regulatory framework yet. The result is a patchwork of measures at different stages of development across jurisdictions.

In the EU, the EU AI Act introduces transparency requirements for certain AI-generated or manipulated content, such as disclosure of deepfakes and labelling obligations in specific contexts, with enforcement action expected to commence later in 2026 once the remaining rules under the Act are implemented. In the US, there is still no comprehensive federal AI law. Instead, regulation is emerging at the state level, where lawmakers have focused on narrower but fast-growing risks such as voice cloning, deepfake impersonation, and the unauthorised use of a person’s likeness or voice.

In the UK, the Government has been reviewing how existing legal frameworks, particularly copyright law, data protection rules, and performer rights, apply to AI-generated content. A key policy focus has been the concept of “digital replicas”, including AI-generated voice or likeness simulations. The Government has consulted widely with creative industries on whether new, dedicated protections are needed. However, no fully implemented digital replica regime exists yet, and future rules remain under active development.

In parallel with government regulation, digital platforms and industry bodies are introducing their own governance mechanisms. Spotify, for example, has updated its policies to address AI-generated music by improving metadata transparency, cracking down on spam-like AI content uploads, and participating in industry efforts (such as standards developed through organisations like the Digital Data Exchange (DDEX)) to improve labelling and attribution of music content.

Overall, the result is a multi-layered and evolving governance system, where legal regulation, platform policy, and industry standards are developing in parallel but remain inconsistent across regions and sectors.

What this means for your business

AI-generated music is not being stopped by litigation, nor fully regulated by statute. Instead, it is being shaped through a combination of legal pressure, commercial negotiation and platform governance. 

For businesses operating in this space, whether as rights holders, technology developers, platforms, or investors, the key challenge is no longer whether AI-generated music is lawful in principle. It is how to structure rights, manage risks and capture value in a system where rules are still being written. Proactive engagement with the emerging licensing and regulatory landscape is essential.

If you have questions about how these developments affect your business, or if you need advice on AI-related licensing, intellectual property strategy, or regulatory compliance, please do not hesitate to get in touch with our team.
