Opinion

Where are we on copyright and AI in the UK?

The UK government has published its Report and Economic Impact Assessment on the use of copyright works in the development of AI systems.

The report confirms that the UK government’s preferred option is no longer a broad exception to copyright infringement for text and data mining (TDM) with a possible “opt-out” for rights owners. Instead, the UK is adopting a “wait and see” approach. It will now allow industry-led licensing arrangements to develop, monitor global legal developments and further engage with stakeholders to explore best practices before considering any legislative changes. There will also be a further consultation this summer to explore a range of options for addressing the harm caused by deepfakes, including the possibility of a new digital replica or personality right.

Background

These publications1 follow the UK’s wide-ranging consultation on AI and copyright launched in December 2024 (see our previous post on AI and copyright for further details). They were accompanied by a written statement from Liz Kendall (Secretary of State for Science, Innovation and Technology), who emphasised that the UK government (the “government”) wishes to do what is right for the whole British economy rather than choosing between the UK’s creative industries and its AI sector.

The consultation and subsequent stakeholder engagement have demonstrated a current lack of consensus on how to reconcile competing objectives: ensuring that copyright owners can control, and be fairly rewarded for, the use of their works, while providing AI developers with access to high-quality training data to unlock AI-driven innovation and grow the economy. The government wishes to take its time to try to achieve both.

The publications cover a broad range of issues regarding copyright in the context of AI:

  • Broader TDM exception with rights owner opt-out
  • Copyright laws in other jurisdictions
  • What is next for AI and copyright in the UK?
  • AI-generated works
  • Digital replicas
  • Transparency of training data and labelling of AI-generated content
  • Licensing and independent creatives

Broader TDM exception with opt-out no longer preferred

The decision to drop the preference for a broader TDM exception with opt-out is not entirely unexpected. Most respondents to the consultation rejected the suggestion, and the UK’s creative industries lobbied very strongly against it2. Creatives were concerned that the opt-out approach would lead to their works being used to train AI models against their wishes, reducing their ability to seek remuneration. Technology developers, meanwhile, predicted that it could lead to a high take-up of rights owner opt-outs, undermining the exception’s purpose of supporting their access to more training data. This could cause the UK’s regime to become more restrictive than those of other jurisdictions, ultimately discouraging the development of AI models in the UK.

The report also noted that some respondents had argued that a broad TDM exception would contravene the Berne Convention’s “three-step test”, a fundamental requirement of copyright law. It would, in any event, only apply to the right to make copies and would not exempt dataset providers from any infringement for distributing or communicating works to the public. 

On balance, however, the biggest challenge to the proposal was probably a practical one: robust technical standards and mechanisms to implement, monitor and enforce an opt-out do not yet exist.

There will be no copyright law reform until it becomes clear that reform will meet the objectives for the UK economy. The key message is that the government will take its time to get this right. 

Copyright laws in other jurisdictions

The consultation notes how copyright exceptions in other countries, which are often considered to be broader than the UK’s, may in fact incorporate a number of restrictions. For example, Japan’s TDM exception does not apply to uses that would unreasonably prejudice the interests of the copyright owner and is limited to purposes that do not allow someone to enjoy the thoughts or sentiments expressed in the work.

The U.S. fair use doctrine is likewise subject to four factors, most notably the effect of the use on the market for the original copyright work. In time, these exceptions could be held to be narrower in scope than the broad exception proposed in the UK consultation. For now, however, the lack of a broad data mining exception or fair use principle in UK law places the UK among the countries with greater protection for rightsholders and less flexibility for AI developers.

The report notes that greater clarity is expected in the coming year over the scope and effect of laws in other jurisdictions, particularly with litigation in the U.S.3 and Germany4, and the first CJEU reference5 on copyright infringement and AI training. (Please see our earlier article discussing the key implications from ongoing legal cases for copyright and AI). The government wishes to monitor these developments and notes that there may be some levelling of the international playing field as laws are clarified or changed.

So, what is next for AI and copyright in the UK?

There is no longer a preferred option, and the UK is adopting a “see how it goes” approach, while it monitors global market and legal developments. This unfortunately means continued uncertainty regarding the risks of using copyright materials in AI training. It will also be left to the courts to decide if UK copyright law can apply to systems trained outside of the UK. The Getty Images case provides the most recent guidance on our existing legislation (see our previous article on AI models trained abroad for further details) but an appeal on this issue is due to be heard before December 2026. 

The impact assessment notes6 that, under a “do nothing” scenario, permission would usually be needed to copy protected works at different stages of AI training and development in the UK. There is already an increase in commercial licensing arrangements for the use of copyright works in AI systems, including licensing of content created and owned by UK creators, and the market is expected to expand. The government expects the licensing market to be highly sensitive to the copyright regimes in other jurisdictions and thinks this may lead to greater licensing of UK content across national boundaries without intervention. For this reason, the government does not propose to amend copyright law in respect of systems developed outside the UK at this time. Ultimately, it will observe, monitor and support emerging market-led licensing arrangements and will intervene only if needed at a later stage. It also considers that better AI systems may result if AI developers adopt a targeted licensing approach and pay for higher quality training data.

There is a growing volume of data licensing transactions… with an increasingly nuanced spectrum of deals across the AI development lifecycle.

This position aligns with a recent report published by the House of Lords Communications and Digital Committee “AI, copyright and the creative industries”. This called for the government to rule out any reform to copyright legislation that would remove the incentive to license copyrighted works for AI training, instead urging the government to focus on strengthening licensing and transparency regimes.

AI-generated works

The government proposes to continue to monitor the use and impact of copyright protection for works generated wholly by AI. In the absence of evidence of its ongoing value, it proposes to remove this specific type of protection, although there is no firm commitment as to when any change may be brought into effect.

Copyright will continue to protect works created with AI assistance, i.e., if there is a sufficient element of human authorship in the work’s creation. 

Again, this is not a surprise: the provision is rarely used (there have been no cases on it to date), the UK courts have generally been willing to find some element of human creativity, and there are internal inconsistencies in the originality requirement being fulfilled wholly by a computer (see our previous article on ownership of AI-generated content in the UK). The government is also conscious that other countries are leading on AI development without this type of protection.

The next phase of work

The government has identified a number of areas as the focus for the next phase of its work.

1. Digital replicas

The report acknowledges that synthetic media can be a powerful tool for the creative industries but can also introduce the risk of harm if someone’s likeness is replicated without their permission. 

This is one area where there is a definite commitment. This summer, the government will launch a consultation to explore a range of options for addressing the harm caused by deepfakes, while protecting the potential of technology to support legitimate innovation. Whilst the current laws of passing off, defamation, online safety, and the UK GDPR have some potential to control digital replicas, the consultation will consider whether it would be beneficial to introduce a new digital replica or personality right.

We noted in our previous opinion that tackling digital replicas in an effective and efficient manner around the world is problematic. There is a complex patchwork of inconsistent legal measures that need to be analysed and applied to the circumstances of a particular deepfake. Clearer and broader protection in this area is to be welcomed.

2. Transparency of training data and labelling of AI-generated content 

Output labelling

The government recognises that it can be helpful for consumers to understand whether content has been made using AI (which can also help protect against disinformation and harmful deepfakes). Responses to the consultation were generally in favour of labelling wholly AI-generated content, with a more nuanced approach for AI-assisted works. However, the government has still not committed to any definite changes at this stage. For now, its preference is to work with industry to explore best practice and to continue to monitor international developments (in particular the EU’s labelling requirements7 and voluntary code of practice, due to be finalised in June).

The only concrete commitment at this stage is to establish a taskforce to put forward proposals on best practice for labelling AI-generated content, with an interim report to be published in the autumn. This may inform any future legislation but there is no suggested timeframe at this stage. 

Input transparency 

There was support in the consultation for increased transparency on AI training materials, which can improve visibility for rights holders to understand and enforce rights and help to bridge the gap between rightsholders and AI developers. Again, however, the government is not introducing any changes at this stage and, instead, commits to working with industry experts to develop best practice. It also wishes to monitor the effects of transparency legislation in other countries (e.g. the EU8 and California9). The idea behind this delay is to allow common industry practice to develop around the world so the UK’s requirements, when they are finally introduced, will align with and complement those elsewhere. In turn, this may help to set global standards. 

Ultimately, this postpones the imposition of transparency requirements in the UK, but it does suggest that they will be introduced at some point. This delay is actually a pragmatic and practical one.

The technologies required for proper transparency simply do not yet exist for many types of (particularly text-based) output to be labelled reliably.

This is a barrier for multinationals trying to devise a consistent global approach to labelling and transparency. To overcome it, they depend on AI developers creating a standard means for AI models to label outputs automatically; in the meantime, they must choose between non-compliance and costly, inefficient manual processes.

It has also been confirmed that there is no proposal to introduce a new regulator (or impose new obligations on existing regulators) in relation to transparency or other measures at this time. 

3. Licensing and independent creatives

The government is committed to establishing a Creative Content Exchange (CCE), which is intended to be a trusted marketplace for digitised cultural and creative assets. A pilot phase has been launched with an early adopter cohort of public institutions. The government will launch a working group on independent and smaller creative organisations to explore whether there is a role for government to support their ability to license their content.

Developments in other jurisdictions

The government’s approach of market-led licensing, further monitoring of developments and consultation on best practice can be contrasted with more decisive action and regulation in the EU and U.S.

EU 

The European Parliament recently adopted a resolution on copyright and generative AI. This emphasised the need for transparency regarding training datasets, requiring AI providers to disclose detailed records of the copyrighted content used to train their systems and to maintain records of web crawling activities. 

There was also a focus on voluntary licensing and on requiring permission to use copyright materials for AI training. While the EU still encourages machine-readable opt-outs, the resolution calls on the European Commission to facilitate the establishment of sector-by-sector collective licensing agreements where no opt-out has been exercised. It also emphasises that EU copyright law should apply to all generative AI systems available on the EU market, regardless of where training takes place. However, as in the UK, content fully generated by AI should not be protected by copyright.

United States 

The White House has also recently released a National Policy Framework for Artificial Intelligence, with the aim of introducing federal legislation and regulation concerning AI across the United States.10

The framework makes it clear that the U.S. administration believes that training AI models on copyrighted material does not violate copyright laws. Nevertheless, it acknowledges that there are arguments to the contrary and will support the courts in resolving those issues. In the meantime, it suggests that Congress should consider enabling collective rights systems or licensing frameworks for rights holders collectively to negotiate compensation from AI providers, although legislation should not determine when or if licensing is legally required. Instead, Congress should monitor how the courts deal with copyright disputes and consider additional action if needed.

Like the UK, it encourages consideration of a federal framework to stop unauthorized AI-generated replicas of a person’s voice, likeness, or other identifiable attributes.

Comment and key takeaways

  1. While it is commendable that the government wishes any decision in this area to benefit all competing stakeholders, the lack of any firm commitment and the deferral of legislative change only prolong uncertainty for stakeholders. Overall, there will be more working groups, discussions and further consultation, but no concrete action and no real timeline for substantive change.
  2. What is clear, at least for the near future, is that the government does not want to appear to favour either the UK creative or tech sector, who have clear competing interests. Instead, it is hoping that market forces and industry licensing practice will achieve a working balance. It may be that allowing creatives and technology companies to work together to reach suitable commercial arrangements is the most cohesive way forward. Pragmatically, this also allows more time for technological developments regarding, e.g., labelling of AI-generated content.
  3. Multinationals will appreciate efforts to monitor developments in other countries and moves towards the establishment of consistent, global standards in this area. Others, however, might prefer the UK to take the lead rather than “watching and following” other jurisdictions that have moved more quickly and decisively.
  4. For now, the courts will be left to pass judgment when disputes do arise, but that will be based on existing laws, as opposed to any new AI-specific legislation. Nevertheless, the door is still open for legislative changes that are deemed necessary in the future.
  5. The abandonment (for now) of a broader TDM exception is a good result for the creative industries, who lobbied hard against its potential to cause them harm. It should be remembered, however, that copyright owners are simply back in the position they were in before the consultation. As demonstrated by the Getty v. Stability trial, there can be jurisdictional problems with proving the copying of copyright works during AI training and, unless the Getty decision is overturned on appeal, AI models that are trained overseas but deployed in the UK may still avoid UK copyright infringement.
  6. Overall, until there is further clarification of the legal position in the UK, there is still an inherent risk of copyright infringement in training AI models. For now, AI developers will not benefit from a clear, broad legislative exception to train on copyright works in the UK and should therefore rely on contractual licensing to manage these infringement risks.

Footnotes

1. These fulfil the UK government’s commitment under the Data (Use and Access) Act 2025.

2. A statement of progress published in December 2025 revealed that 88% of respondents favoured stronger copyright and licensing for AI training, including for models trained overseas.

3. E.g. Thomson Reuters v. Ross Intelligence

4. GEMA v. OpenAI (case no. 42 O 14139/24)

5. Like Company v. Google (C-250/25)

6. Page 7

7. Article 50 EU AI Act

8. Article 53 EU AI Act

9. Transparency in Frontier Artificial Intelligence Act—see our previous article on this landmark AI law for further details.

10. The framework isn’t binding and will need congressional support to be implemented. 
