Artificial intelligence | UK Regulatory Outlook October 2025
Published on 29th October 2025
UK: Clearview AI faces recognition case: Upper Tribunal rules that Clearview processing falls within GDPR | Government announces new AI regulatory sandboxes | DRCF call for views: agentic AI and regulatory challenges | EU: Commission launches AI Act Single Information Platform | Draft guidance and reporting template for serious AI incidents | Digital Omnibus: Simplifying EU rules on AI | EU launches its AI adoption strategies
UK updates
Clearview AI faces recognition case: Upper Tribunal rules that Clearview processing falls within GDPR
In a significant win for the Information Commissioner's Office (ICO), judgment has been handed down on the ICO's appeal on jurisdictional issues in the Clearview AI case.
Clearview AI is a US company (with no UK presence at the relevant time) which scraped billions of public images from the internet to create a searchable facial recognition database. The images are analysed and stored as facial vectors, alongside relevant metadata and identity details. Clearview sells access to the database to clients (which include foreign states and intelligence services) for use in national security and law enforcement. Clients access the database by uploading a facial image to the Clearview system, which initiates a search of the database for images with the same or similar facial vectors and issues a report with suggested matches.
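The search step described above is, in essence, nearest-neighbour matching over embedding vectors. A minimal illustrative sketch follows (hypothetical code, not Clearview's actual system; the `facial_vector` stand-in simply normalises pixel values, whereas a real system would use a trained face-embedding model):

```python
import numpy as np

def facial_vector(image_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model: returns a unit-length vector."""
    v = image_pixels.astype(float).flatten()
    return v / np.linalg.norm(v)

def search(database: dict[str, np.ndarray], probe: np.ndarray,
           threshold: float = 0.9) -> list[tuple[str, float]]:
    """Return (image_id, cosine similarity) pairs above the threshold,
    best match first. Vectors are assumed to be unit-length, so the dot
    product equals cosine similarity."""
    matches = [(img_id, float(probe @ vec)) for img_id, vec in database.items()]
    return sorted((m for m in matches if m[1] >= threshold),
                  key=lambda m: m[1], reverse=True)
```

The report of "suggested matches" then corresponds to the ranked list returned for each probe image.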
In the underlying case back in 2022, the ICO had found that Clearview's actions breached the General Data Protection Regulation (GDPR), and had issued Clearview with an enforcement notice and a monetary penalty notice (effectively fining it £7,552,800).
Clearview appealed the decision to the First-tier Tribunal (FTT). The FTT agreed with Clearview that the ICO lacked jurisdiction in the case. It found that although the processing undertaken by Clearview was related to the monitoring of data subjects' behaviour in the UK by its clients, Clearview's part in the processing was outside the territorial scope of the GDPR because Clearview's clients' processing for national security and law enforcement activities was outside the material scope of the GDPR. See this Insight for background on the FTT findings.
The ICO appealed to the Upper Tribunal. The Upper Tribunal partly overturned the FTT, deciding that Clearview's activities:
- Did fall within the GDPR's territorial scope (Article 3), on two separate grounds. Firstly, because Clearview's own activities (collecting, storing, analysing) amounted to "monitoring", and secondly because those activities "related to" monitoring activities conducted by Clearview's clients.
- Were not excluded by the material scope provisions (Article 2). Those provisions might mean that activities by Clearview's clients were out of scope, but that exclusion did not permeate through to Clearview's own activities.
Welcoming the decision, the Information Commissioner said that "It is essential that foreign organisations are held accountable when their technologies impact the information rights and freedoms of individuals in the UK."
Unless Clearview seeks and obtains permission to appeal the Upper Tribunal decision, the case will return to the FTT, which will consider the substance of Clearview's original appeal.
Government announces new AI regulatory sandboxes
Liz Kendall, the technology minister, has announced plans for "AI Growth Labs" (that is, regulatory sandboxes) to allow companies to test AI products in supervised real-world conditions with temporarily relaxed regulations to help accelerate AI development and deployment. The government says that Growth Labs will "deliver on the commitment" in the AI Opportunities Action Plan to "implement pro-innovation initiatives like regulatory sandboxes". They would initially "prioritise applications delivering maximum strategic value for the UK" in key sectors such as healthcare, use of agentic AI in professional services, transport and the use of robotics in advanced manufacturing.
Case studies mentioned include using AI to speed up the planning system, to autonomously interpret images from patient scans, and use of small robotic "micromobility" vehicles to make NHS deliveries.
The current proposal seems to involve:
- Primary legislation conferring on ministers the power to create sandboxes.
- Secondary legislation to exercise that power by setting up a series of sandboxes, each focused on a specific area of innovation.
- Time-limited, targeted modifications to particular sectoral regulations deemed to be hindering AI adoption.
- Licences which would impose innovation-specific safeguards, monitoring and restrictions on participating organisations.
- Using the pilots as a trial of regulations, with possible conversion of successful pilot modifications into permanent regulatory reforms.
The government is pondering questions such as whether there should be a single government-operated lab, or several regulator-operated labs, and how to balance flexibility with appropriate scrutiny. It is also considering using sandboxes for other technologies which might drive growth, such as quantum, advanced connectivity and clean energy.
The government has launched a call for evidence on the proposals, closing on 2 January 2026.
DRCF call for views: agentic AI and regulatory challenges
The Digital Regulation Cooperation Forum (DRCF) is inviting views on the regulatory challenges associated with the adoption of agentic AI. The DRCF brings together four UK regulators: the ICO, the Competition and Markets Authority (CMA), Ofcom and the Financial Conduct Authority (FCA).
The call for views is part of the DRCF's Thematic Innovation Hub, which aims to foster dialogue and surface insights from innovators and stakeholders. It wants to understand the practical challenges and regulatory uncertainties businesses are facing when developing or deploying agentic AI, asking questions such as:
- How do current regulations support or hinder innovation in agentic AI?
- Are there specific areas (for example, data protection, liability, consumer protection, competition, copyright) where clarity is most needed?
- Are there sector-specific concerns (for example, legal services, finance, telecoms) that should be considered?
The call for views closes on 6 November 2025.
The DRCF has also published takeaways from its one-year AI and Digital Hub pilot. During the pilot, the hub provided free, tailored informal advice on "complex, cross-regulatory challenges" to 20 early-stage AI businesses and startups.
The key lesson, according to the DRCF, is that "regulation isn't a roadblock – it's a roadmap", stressing that compliance can save business time, attract investment and build trust.
EU updates
EU AI Act updates
Commission launches AI Act Single Information Platform
The European Commission has launched the AI Act Single Information Platform, aimed at supporting effective implementation of the AI Act.
The Single Information Platform will serve as a central hub where stakeholders can find all relevant information on the Act, navigate its content, understand how it applies and access tailored guidance on its implementation. The platform will include:
- The AI Act Service Desk, where organisations can submit questions on the AI Act, which will be reviewed by a team of expert professionals working in close cooperation with the AI Office.
- The AI Act Compliance Checker – a tool designed to help an organisation assess whether it is subject to the Act's legal obligations, and understand the steps needed to comply.
- AI Act Explorer – a tool designed to help users browse through different chapters, annexes and recitals of the Act in an intuitive way.
Draft guidance and reporting template for serious AI incidents
From 2 August 2026, the EU AI Act obligations for providers of high-risk AI systems will begin to come into force (the exact timing depending on the nature of the AI system). These include the obligation (set out in Article 73) to report serious incidents to the market surveillance authorities of the Member States where the incident occurred.
The European Commission has published draft guidance on incident reporting, together with a draft reporting template. The guidance attempts to clarify the AI Act's definition of a "serious incident", breaking it down and analysing its constituent elements. The definition is:
"an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
(a) the death of a person, or serious harm to a person's health;
(b) a serious and irreversible disruption of the management or operation of critical infrastructure;
(c) the infringement of obligations under Union law intended to protect fundamental rights;
(d) serious harm to property or the environment."
Among other things, the guidance:
- Considers what is meant by the qualifier "serious" in the various parts of the definition.
- Looks at the different timeframes for reporting depending on the severity of the incident/malfunction (for example, whether it involves a death, or "widespread infringement").
- Confirms that AI systems used in sectors that already have an equivalent reporting regime (including financial services, medical devices and critical infrastructure) do not have to report under Article 73 unless the incident/malfunction is an infringement of fundamental rights.
- Emphasises that the Article 73 reporting obligations apply only to AI systems, not to AI models, and therefore do not overlap with the serious incident reporting obligation that applies to providers of "general-purpose AI models with systemic risk" under Article 55(1)(c).
The draft guidance and template are subject to consultation until 7 November 2025. It is unlikely that the outcome of the consultation will result in any fundamental changes to the draft guidance and form, so in practice many businesses will use the drafts to inform their internal compliance preparations.
Other updates
Digital Omnibus: Simplifying EU rules on AI
The European Commission has published a call for evidence, which closed on 14 October 2025, as part of research on how to simplify legislation in the upcoming Digital Omnibus, focusing on areas including AI, data and cybersecurity. See Data law section for data-related proposals.
The objective is to reduce the administrative costs of compliance, including by ensuring "predictable and effective application of the AI Act, aligned with the availability of all the necessary support and enforcement structures".
The Digital Omnibus will include measures targeting problems and seeking simplification in various areas, including those aimed at the "smooth application of the AI Act rules", which will focus on:
- intervention to ensure the optimal application of the recently adopted rules;
- providing legal predictability to businesses that are about to apply the rules;
- implementation challenges to be identified in consultation, taking into consideration the needs of small mid-caps; and
- facilitating a smooth interplay with other laws.
There will be changes in related areas, including:
- The Data Governance Act, the Free Flow of Non-Personal Data Regulation and the Open Data Directive.
- Rules on cookies and other tracking technologies laid down by the ePrivacy Directive.
EU launches its AI adoption strategies
The Commission has published two documents aimed at boosting the creation and adoption of AI in the EU:
- The Apply AI Strategy, which is about encouraging EU countries to incorporate AI into their operations.
- The AI in Science Strategy, focussing on how to position the EU at the forefront of AI-powered science and scientific research.
Apply AI Strategy
The strategy is designed to help industries and the public sector understand what AI can do, where it is effective and how it can bring competitive advantage. The hope is that it will enhance the competitiveness of strategic sectors, strengthen the EU's technological sovereignty and boost AI adoption and innovation particularly by SMEs. It will do this by encouraging organisations to adopt an "AI first" stance, that is, to always consider AI as a possible solution when making any strategic or policy decisions and to "buy European", and "buy open source", especially for public sector organisations. The Commission will provide about €1 billion to support the strategy.
AI adoption will be encouraged in 11 sectors:
- healthcare and pharmaceuticals;
- mobility, transport and automotive;
- robotics;
- manufacturing, engineering and construction;
- climate and environment;
- energy;
- agri-food;
- defence, security and space;
- electronic communications; and
- cultural, creative and media sectors.
There will be actions to improve the EU's tech sovereignty. The 250-plus existing European Digital Innovation Hubs will morph into "Experience Centres for AI", providing access to AI resources including AI Factories and AI Gigafactories, AI Testing and Experimentation Facilities, and AI regulatory sandboxes.
There are also plans to coordinate AI governance, centring around a new Apply AI Alliance, a coordination forum where stakeholders can share opinions, papers and recommendations with the Commission and the wider AI community.
There will be a closely connected body, the AI Observatory, which will conduct policy analysis to support decision-making, track AI trends, and disseminate information about recent AI developments. The observatory will also produce "robust indicators" to enable it to assess the sectoral impact of AI.
AI in Science Strategy
The Commission says that AI "is profoundly transforming the way scientific research is conducted" and points out that other leading economies, including the UK, are investing heavily in AI for science. The EU plan is to boost AI use across all scientific disciplines, with a strategy centred around the new Resource for AI Science in Europe (RAISE), a virtual European institute to pool and coordinate resources for developing AI and applying it in science. The strategy includes:
- Talent: measures to train, attract and retain in the EU highly skilled AI and science talent from outside the EU.
- Compute: €600 million from Horizon Europe to enhance access to computational power for science, providing dedicated access to the EU's planned network of AI gigafactories for EU researchers and startups.
- Research: doubling funding for AI in science.
- Data: support for scientists to identify strategic data gaps and gather, curate and integrate new sources of data. This will provide the huge, high-quality datasets that are needed to train the AI models that will underpin use of AI in science. To this end, at the end of October the Commission intends to publish a Data Union Strategy to "better align data policies with the needs of businesses, the public sector and society".
The Commission's Joint Research Centre is also involved and has already published a report on AI's impact on science and research.