Artificial intelligence | UK Regulatory Outlook March 2026
Published on 26th March 2026
UK government position on copyright and AI | Lords Communications and Digital Committee calls on government to reject opt-out model and strengthen creator rights | CMA expectations for businesses deploying AI agents | EU code of practice on marking and labelling of AI-generated content: second draft | Digital Omnibus proposal update | European Parliament calls for transparency, fair remuneration and new licensing rules | International data protection authorities issue statement on privacy risks of AI-generated imagery
UK updates
Government sets out position on copyright and AI
On 18 March 2026, the UK government published its report on copyright and AI. Key highlights include:
- No reforms to copyright law yet. The government is not introducing reforms at this stage, stating that it "must take the time needed to get this right."
- Opt-out model dropped. A broad copyright exception with an opt-out mechanism is no longer the government's preferred option. It proposes to gather further evidence on how copyright laws are affecting the development and deployment of AI.
- Transparency and labelling of AI-generated content. The government proposes to work with industry and experts to develop best practice on input transparency, with any outcomes informing potential future legislation, and to explore best practice on the labelling of AI-generated content.
- Licensing. The government proposes not to intervene in the licensing market at this stage, keeping market-led approaches under review, and to identify and assess further levers to support access to valuable datasets, including through the Creative Content Exchange.
- Computer-generated works. Stating that copyright should incentivise and protect human creativity, the government proposes removing the specific copyright protection for wholly computer-generated works, while confirming that copyright should continue to protect works created with AI assistance.
- Digital replicas. The government proposes to explore options to address risks arising from the growing use of AI-generated realistic impersonation, including considering whether a new personality right may be appropriate.
- Enforcement. The government proposes to continue working with law enforcement and the judiciary to ensure the UK enforcement framework remains fit for purpose, undertake further work to identify and address enforcement barriers, and consider the case for regulatory oversight of transparency or other measures if legislation is introduced. No new regulator is proposed at this stage.
For more on this, see this insight.
Lords Communications and Digital Committee calls on government to reject opt-out model and strengthen creator rights
Shortly before the government published its report on copyright and AI (see above), the House of Lords Communications and Digital Committee published its own report, "AI, copyright and the creative industries", as part of its inquiry into AI and copyright.
It called on the government not to introduce a new commercial text and data mining exception with an opt-out model, and instead to focus on strengthening licensing, transparency and enforcement within the existing framework. It recommended that the government publish a final, evidence-based decision on its approach to AI and copyright within the next 12 months, which should "make clear that strong copyright protection and fair licensing for UK rightsholders are the default".
The committee's other recommendations to the government include:
- Digital identity protection: introducing safeguards against unauthorised digital replicas and harmful "in the style of" AI outputs, giving creators and performers control over commercial use of their identity.
- Training data transparency: establishing a mandatory transparency framework for large AI developers regarding training data.
- Fair licensing market: fostering a sustainable licensing ecosystem for rightsholders and developers of all sizes, and exploring mechanisms to ensure that remuneration reaches individual creators.
- Technical standards for control, provenance and labelling: supporting the creation and adoption of open, interoperable and globally aligned technical standards for rights reservation, data provenance and the labelling of AI-generated content, and being prepared to legislate where necessary to ensure effective implementation.
- Sovereign AI models: focusing sovereign AI efforts on the development of AI models with transparency built in by design and respect for copyright.
CMA sets out consumer law expectations for businesses deploying AI agents
EU updates
EU code of practice on marking and labelling of AI-generated content: second draft published
In November 2025, the European Commission began work on a voluntary code of practice on marking and labelling AI-generated content. The code aims to help providers and deployers comply with transparency obligations under Article 50 of the EU AI Act. A first draft of the code was published in December 2025, and the Commission has now published a second draft.
The second version has been drafted by independent experts, integrating feedback from industry, academia, civil society, Member States and members of the European Parliament. The Commission states that the new draft has been "streamlined and simplified, providing more flexibility for the signatories, reducing the compliance burden and incorporating further technical considerations to improve legal clarity and practicality". It promotes the use of open standards for AI content marking and an EU icon for labelling, with the aim of simplifying compliance and reducing costs.
The Commission is collecting feedback on the second draft from participants and observers to the code until 30 March 2026. It is expected to be finalised by the beginning of June this year, and the transparency obligations are set to become applicable on 2 August 2026 (subject to the changes proposed by the Digital Omnibus on AI).
Digital Omnibus proposal: progress update
Discussions among EU institutions on the Digital Omnibus Regulation, the European Commission's proposal to make significant changes to the EU GDPR and other data legislation, are ongoing.
Separately, the EU legislative procedure on the Digital Omnibus on AI, which proposes changes to the EU AI Act, is progressing rapidly. The Council of the EU has adopted its position and the European Parliament is close to finalising its own.
See Osborne Clarke's Digital Omnibus microsite for the latest updates.
European Parliament calls for transparency, fair remuneration and new licensing rules
The European Parliament has adopted a resolution setting out a series of recommendations on protecting copyrighted creative work in the age of generative AI. It calls for a supplementary legal framework to clarify licensing rules for copyrighted material used in generative AI, address potential infringements and ensure effective cooperation between AI providers and rightsholders.
The Parliament states that rightsholders, particularly in the press and news media sector, should be able to exclude their work from being used in AI training, and highlights the importance of full transparency. It calls on the Commission to explore mechanisms to ensure fair compensation from generative AI providers and to facilitate voluntary sector-level collective licensing agreements. It also considers that content fully generated by AI that does not meet the established criteria for copyright protection should remain ineligible for such protection.
International updates
International data protection authorities issue statement on privacy risks of AI-generated imagery
International data protection authorities, including the UK's Information Commissioner's Office and the European Data Protection Board, have published a joint statement on AI-generated imagery and the protection of privacy.
The statement addresses concerns about AI systems that generate realistic images and videos of real people without their knowledge, including non-consensual intimate imagery and defamatory depictions. While noting that specific legal requirements vary by jurisdiction, the statement sets out fundamental principles for organisations developing and using AI content generation systems, including:
- Implementing robust safeguards to prevent misuse of personal information and generation of non-consensual intimate imagery, particularly involving children.
- Ensuring meaningful transparency about AI system capabilities, safeguards, acceptable uses and consequences of misuse.
- Providing effective and accessible mechanisms for individuals to request prompt removal of harmful content involving their personal information.
- Addressing specific risks to children through enhanced safeguards and age-appropriate information.
The statement urges organisations to engage proactively with regulators and implement safeguards from the outset.