Regulatory Outlook

Artificial intelligence | UK Regulatory Outlook February 2026

Published on 26th February 2026

UK: AI Opportunities Action Plan: one year on | AI and copyright: recent comments from the government | New criminal offence for creating non-consensual intimate deepfake images | Automated decision-making provisions under the DUA Act now in force | ICO's agentic AI report  | House of Commons Library report on AI content labelling | Government launches call for information on the development of secure AI computing systems | EU: EDPB and EDPS publish joint opinion on Digital AI Omnibus 

UK updates 

AI Opportunities Action Plan: one year on 

One year has passed since the government published its AI Opportunities Action Plan, setting out 50 commitments for driving AI development and adoption across the UK. It has now published a paper reviewing the progress made in the first year. The government says that it has fulfilled its commitments in respect of 38 of the 50 actions, with detailed progress available via a public dashboard.  

Among the commitments delivered are: 

  • the designation of five AI "growth zones";
  • the establishment of the Sovereign AI Unit;
  • a pilot phase of a creative content exchange (a marketplace to sell, buy and license digitised cultural and creative assets, enabling them to be licensed at scale); and
  • the publication of guidelines and best practices for making government datasets AI-ready, so that businesses and researchers can more easily use public sector data to build AI tools.

The government also notes its ongoing work with regulators to accelerate AI in priority sectors and implement regulatory sandboxes. It highlights its call for evidence on AI growth labs, which would allow companies to test AI products in supervised real-world conditions under temporarily relaxed regulations, to help accelerate AI development and deployment – see this Regulatory Outlook for more information.  

The reform of the UK text and data mining regime is among the ongoing commitments. In December 2025, the government published a progress report on its work on AI and copyright, including initial feedback on its consultation on copyright and AI – see this Regulatory Outlook for more information. The government is required to publish a full report, considering the options set out in the consultation and impact assessment, by 18 March 2026. 

AI and copyright: recent comments from the government 

The technology secretary, Liz Kendall, and the secretary of state for culture, media and sport, Lisa Nandy, were questioned on AI and copyright by the House of Lords Communications and Digital Committee on 13 January. 

Addressing the December report, Ms Kendall stated that the government is "having a genuine reset moment" to find an approach that would benefit both the creative industries and the potential opportunities presented by AI. Ms Nandy added that no decision had been made yet. 

Ms Kendall was asked whether the government will take a definitive position on its approach to AI and copyright in the final report due by 18 March. She responded that the government has not yet decided, but the lesson that it has learned from the consultation process was that having a preferred option was not the right approach. Ms Nandy added that the government appreciates the urgency around the issue, but it is not rushing into this and will take the time to work through this matter with the working groups. She also noted that there is currently no "workable opt-out proposal on the table". 

New criminal offence for creating non-consensual intimate deepfake images 

The regulations bringing into force the offence of creation – or requesting the creation – of non-consensual intimate images, including deepfakes, under the Data (Use and Access) (DUA) Act 2025 were made on 15 January and came into force on 6 February.  

The technology secretary, Liz Kendall, said that this will be made a priority offence under the Online Safety Act 2023. Currently, only the offence of sharing intimate images without consent is a priority offence under that Act, meaning that regulated services must take proactive steps to prevent users from encountering such content, rather than acting only after the fact. 

(See the Digital Regulation section for information on the government's plans to strengthen online safety for children, including to close the legal loophole in the Online Safety Act as regards AI chatbots.)  

Automated decision-making provisions under the DUA Act now in force 

The DUA Act introduces a number of significant reforms to the UK General Data Protection Regulation (GDPR). The Data (Use and Access) Act 2025 (Commencement No 6 and Transitional and Saving Provisions) Regulations 2026, made on 29 January, bring many of these provisions into force, including the changes relating to automated decision-making. 

On 5 February, section 80 of the DUA Act replaced article 22 of the UK GDPR, which governs automated individual decision-making, including profiling. 

Article 22 of the UK GDPR essentially prohibited solely automated decision-making as the default position, subject to specific exceptions (contractual necessity, authorisation by law or explicit consent). The amended approach softens these restrictions while introducing a specific restriction for decisions involving special category data: such decisions cannot be made solely through automated processing unless the decision is based on explicit consent, or the contractual or legal necessity test applies and substantial public interest grounds exist. This is more targeted than the previous regime, which applied restrictions to all automated decisions regardless of the type of data involved. 

Automated decision-making is also prohibited for "significant decisions" where the processing relies entirely on the recognised legitimate interests lawful basis, a new ground for processing introduced by the DUA Act. 

Significant decisions based solely on automated processing require controllers to ensure that appropriate safeguards are in place to protect data subjects' rights, freedoms and legitimate interests. These include informing data subjects about the decision and enabling them to make representations, obtain human intervention and contest the decision.  

While notable restrictions continue to apply to the processing of personal data for automated decision-making, the new provisions afford controllers greater flexibility and provide enhanced legal clarity. 

See also the Data law section for more information on the DUA Act. 

ICO's agentic AI report  

The Information Commissioner's Office (ICO) has published a tech futures paper on "agentic AI". It is not formal guidance but an "early thinking" foresight report, signalling how the ICO currently views the privacy and data protection implications of more autonomous AI systems that can plan, act and interact with other tools and agents. 

There are a few themes that are particularly relevant for technology and retail businesses. 

  • Purpose limitation versus "more data equals a better agent". Agentic systems typically perform better when given broad access to data and tools. The ICO stresses the continuing need for clear, specific purposes and strict data minimisation, supported by technical and configuration controls over what an agent can access and do.
  • Rapid generation and inference of new personal data. Agents will not just process existing data; they will infer and create new data and profiles at scale. The ICO highlights risks around profiling, cascading hallucinations and inaccurate outputs, and the possibility that agents may infer and use special category data, triggering article 9 (of the UK GDPR) conditions and consent challenges.
  • Complex multi-agent data flows and rights. As agents communicate with each other and multiple systems, information flows become harder to trace. This complicates the delivery of data subject rights (particularly access, rectification and erasure) and raises expectations around transparency, logging and accountability.
  • Amplification of existing generative AI issues. Many familiar themes recur, such as controller and processor roles, lawful basis, automated decision-making (ADM) and bias, but they are intensified by the autonomy, scale and opacity of agentic workflows. 

The ICO also points to positive use cases, including "DPO agents"; privacy and personal information management agents; and agents that assist with data subject access requests, freedom of information request handling and broader compliance. 

The ICO is actively seeking engagement from organisations developing or deploying agentic AI, ahead of a statutory code of practice on AI and ADM and updated guidance on ADM and profiling. 

House of Commons Library report on AI content labelling  

The House of Commons Library has published a briefing paper examining AI content labelling, including its purpose and functionality. The briefing provides examples of ways to label AI content, such as visible disclaimers and invisible machine-readable watermarks. The latter involves embedding "technical signals" in a piece of content that provide details of its origin or composition and can be detected by specialised algorithms. 

The briefing also examines the regulatory framework and company policies affecting AI content labelling in the UK. Currently, there is no UK legislation requiring AI-generated content to be labelled as such. The government's consultation on copyright and AI acknowledged the benefits of clear AI labelling, but noted that there are "technical challenges involved". 

In the EU, article 50 of the EU AI Act sets transparency rules for content produced by generative AI, and the European Commission is currently drafting a code of practice on marking and labelling AI content – see this Regulatory Outlook for more information. 

How the UK and EU approach the issue of content labelling and transparency, and how consistently they do so, will be important for organisations using AI across both jurisdictions. 

Government launches call for information on the development of secure AI computing systems 

The government has launched a call for information, closing on 28 February, to gather views on current capabilities and practical constraints in the development of secure AI computing systems. 

It has established a joint research programme between the Department for Science, Innovation and Technology, the AI Security Institute and the National Cyber Security Centre to support the development of secure AI infrastructure, which will enable the development and deployment of advanced AI models. 

This call for information is aimed at AI and cyber security sectors as well as wider industry and academia. 

Advertising Association publishes best practice guide for the responsible use of generative AI in advertising 

See the Advertising and marketing section.

EU updates 

EDPB and EDPS publish joint opinion on Digital AI Omnibus 

The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have published a joint opinion on the European Commission's proposal for the Digital Omnibus on AI. The proposal seeks to simplify the implementation of certain rules under the EU AI Act – see this Regulatory Outlook for more information. 

The EDPB and the EDPS agree with the objective of addressing certain challenges relating to the implementation of the EU AI Act but stress that administrative simplification should not weaken the protection of fundamental rights.  

The opinion: 

  • Recommends that, for non-high-risk AI systems and models, the ability of providers and deployers to process sensitive personal data for bias detection and correction be limited to circumscribed situations where the risk of harm from bias is considered sufficiently serious; and that, for high-risk AI systems, the standard of strict necessity currently applicable to the processing of special categories of personal data be maintained.
  • Supports the general aim of easing administrative burdens for businesses but advises against the proposed deletion of the obligation to register AI systems that fall under the categories listed as high-risk, even where providers think their systems are not high-risk.
  • Welcomes EU-level AI regulatory sandboxes but stresses that data protection authorities must oversee data processing in them under the EU GDPR, and the EDPB should have an advisory role and observer status at the European AI Board.
  • Calls for a clear delineation of the AI Office's role and clarification of the market surveillance authorities' function as administrative points of contact.
  • Recommends maintaining a duty for AI providers and deployers to ensure AI literacy among their staff.
  • Asks lawmakers to consider keeping the current implementation deadlines for certain rules, such as transparency requirements. If delays are adopted, they should be minimised to the extent possible. 

The EDPB and the EDPS have also published their joint opinion on the Digital Omnibus, which proposes amendments to the EU GDPR, the ePrivacy framework and the broader EU data legislative acquis. See this Insight for more details.  


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.