AEPD guidance on agentic AI and data protection: why existing processing activities need to be revisited
Published on 24th March 2026
Incorporating agentic AI into a process, service or product will often amount to a genuine change in data processing, requiring compliance to be reassessed
The Spanish Data Protection Agency (AEPD) has published its guidance on “agentic artificial intelligence from a data protection perspective”. The AEPD does not merely warn of general AI-related risks. Instead, it focuses on something more specific: where an agent can plan sub-tasks, consult memory, invoke tools, connect with third parties or carry out actions autonomously, the data lifecycle and the attack surface change. So do the requirements around governance, traceability and control.
Agentic AI: not just another technology layer
The guidance does not approach agentic AI as a standalone technology, but as a way of implementing, in whole or in part, personal data processing activities. Its use can change the way data is accessed, transformed, disclosed or retained.
In practical terms, projects internally presented as functional enhancements to an already deployed solution may in fact require a much deeper review of the record of processing activities, the categories of data processed, recipients, international data transfers, retention periods and security measures. The addition of agents therefore requires organisations to revisit processing activities that many had regarded as stable.
The chain of reasoning matters legally
The AEPD emphasises that understanding the chain of reasoning makes it possible to understand the data lifecycle. It is not enough to know what information enters the system and what output it produces. It is also necessary to understand which sources the agent consults, which memory it reuses, which tools it calls, what intermediate transformations it performs, and which data persists at the end of the process.
This has several implications for data protection. The first is traceability: if it is not possible to reconstruct, in sufficient detail, where the data has travelled, it will be difficult to justify compliance with the principles of data minimisation, proportionality or purpose limitation. The second is explainability: the organisation must be able to identify which parts of the output depend on particular sources, inferences or memories. The third is substantive risk control: without sufficient visibility over the chain of reasoning, it becomes much more difficult to detect compounded errors (for example, errors that worsen over successive reasoning steps) or unanticipated uses of personal data. Acting contrary to the guidance could ultimately leave an organisation unable to identify why and how personal data was processed, making it difficult to respond adequately to requests from supervisory authorities or data subjects.
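By way of illustration only, the kind of step-level audit trail that the AEPD's traceability expectation points towards might be sketched as follows. This is a minimal Python sketch under our own assumptions; the class, field and source names are illustrative and do not come from the guidance:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class TraceEvent:
    """One step in the agent's chain of reasoning."""
    step: str              # e.g. "memory_lookup", "tool_call", "transform"
    purpose: str           # declared purpose for this processing step
    data_sources: list[str]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    detail: dict[str, Any] = field(default_factory=dict)

class AuditTrail:
    """Append-only record from which the data lifecycle can be reconstructed."""
    def __init__(self) -> None:
        self._events: list[TraceEvent] = []

    def record(self, event: TraceEvent) -> None:
        self._events.append(event)

    def sources_consulted(self) -> set[str]:
        """Which sources contributed to the output (explainability)."""
        return {s for e in self._events for s in e.data_sources}

# Usage: log every intermediate step so that a supervisory-authority
# query or a data subject request can be answered later.
trail = AuditTrail()
trail.record(TraceEvent("memory_lookup", "customer support", ["crm_memory"]))
trail.record(TraceEvent("tool_call", "customer support", ["geocoding_api"]))
print(sorted(trail.sources_consulted()))  # ['crm_memory', 'geocoding_api']
```

The point of the sketch is that each event carries both a declared purpose and its data sources, so minimisation and purpose limitation can be evidenced step by step rather than inferred after the fact.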
Governance framework and external connections
Among the measures listed by the AEPD, those with the greatest practical value include:

- cross-functional governance involving business owners, IT teams, quality teams and the data protection officer;
- continuous evidence-based assessment, supported by clear metrics, benchmark testing and incident review; and
- allowlists for services, restrictions on the tools an agent may access, and controls over the parameters and responses of each tool call.
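By way of illustration, an allowlist gate over tool calls might be sketched as follows. All tool names and parameters here are hypothetical; this is a minimal sketch of the control the AEPD describes, not an implementation drawn from the guidance:

```python
# Policy: which tools the agent may call, and which parameters
# it may populate in each call (illustrative names only).
ALLOWED_TOOLS: dict[str, set[str]] = {
    "search_kb": {"query"},
    "send_email": {"recipient", "subject"},
}

class ToolCallBlocked(Exception):
    pass

def gate_tool_call(tool: str, params: dict[str, str]) -> None:
    """Reject calls to unlisted tools or calls with unexpected parameters."""
    if tool not in ALLOWED_TOOLS:
        raise ToolCallBlocked(f"tool not on allowlist: {tool}")
    extra = set(params) - ALLOWED_TOOLS[tool]
    if extra:
        raise ToolCallBlocked(f"unexpected parameters: {sorted(extra)}")

gate_tool_call("search_kb", {"query": "retention policy"})  # permitted
try:
    gate_tool_call("delete_records", {"table": "customers"})
except ToolCallBlocked as err:
    print(err)  # tool not on allowlist: delete_records
```

Checking the parameters of each call, and not only the tool name, matters: an allowed tool invoked with an unexpected parameter can still disclose personal data the policy never contemplated.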
Where agents use third-party tools or resources, new processors, sub-processors, independent controllers or even joint controllership scenarios may come into play. New persistent memories, new international data flows and new contractual or transparency obligations may also be triggered.
The AEPD stresses the need to review not only contracts, but also terms and conditions, privacy policies, terms of use and, where relevant, version or functionality changes affecting those services. In an agentic environment, a call to an external tool may, in practice, amount to a partial disclosure of the processing activity with legal relevance of its own. That review should be carried out by design and maintained throughout the system lifecycle, not only at the initial deployment stage.
Data minimisation and memory controls
The guidance is particularly demanding on data minimisation. The AEPD starts from a sensible premise: agents may tend to pursue efficiency through volume, by relying on more data, more context and more memory than is strictly necessary. It therefore reinforces the need to define access policies by processing activity, catalogue available data and its sources, and apply controls over the quality, provenance and consistency of the information used, among other measures.
Poor management of repositories, metadata or labels is not merely an internal housekeeping issue. It may lead the agent to process irrelevant personal information indiscriminately, reuse context beyond the original purpose, or access special categories of personal data without any genuine need. Data governance becomes a condition for the viability of the use case itself.
The AEPD also points out that data minimisation strategies should be complemented by controls over the memory of the agentic AI system. These include segregation by processing activity, case or user; separation between organisational memory and user memory; strict retention periods; and hygiene techniques for persistent memory.
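A minimal sketch of what segregation by processing activity and user, combined with a strict retention period, could look like in practice. The in-process store and all names below are our own illustrative assumptions, not taken from the guidance:

```python
import time

class SegregatedMemory:
    """Agent memory keyed by (processing activity, user), with a strict TTL."""
    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self._store: dict[tuple[str, str], list[tuple[float, str]]] = {}

    def remember(self, activity: str, user: str, item: str) -> None:
        self._store.setdefault((activity, user), []).append((time.time(), item))

    def recall(self, activity: str, user: str) -> list[str]:
        """Return only this activity/user's items, purging expired ones."""
        now = time.time()
        kept = [(t, i) for t, i in self._store.get((activity, user), [])
                if now - t < self.ttl]
        self._store[(activity, user)] = kept  # memory hygiene on every read
        return [i for _, i in kept]

mem = SegregatedMemory(ttl_seconds=3600)
mem.remember("support", "user-1", "prefers email contact")
mem.remember("marketing", "user-1", "opened newsletter")
# Context from one processing activity is not visible to another:
print(mem.recall("support", "user-1"))    # ['prefers email contact']
print(mem.recall("marketing", "user-2"))  # []
```

The segregation key does the legal work here: a memory item recorded for one processing activity cannot be reused, even for the same user, in a different one, which is precisely the purpose-limitation risk the AEPD highlights.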
Data subject rights
Organisations must be able to locate all elements that may fall within the scope of a request for access, rectification, erasure, restriction, objection or portability. This includes prompts themselves and other intermediate elements generated during the process, where they contain personal data. Granular traceability is therefore not simply a technical aspiration; it is a practical condition for compliance with data subject rights.
Measures worth prioritising
The AEPD sets out a non-exhaustive list of measures. Design and control measures deserve particular attention:
- End-to-end traceability: implement mechanisms enabling personal data to be tracked throughout its lifecycle, identifying when, how and why information is processed.
- Repeatability mechanisms: establish methods to verify that the same input produces reasonably similar outputs.
- Strict update controls: assess, in a testing environment, the impact of updates on existing systems, and roll out updates regularly, and promptly where serious vulnerabilities are identified.
- Sandboxing: create a controlled testing environment in which the organisation can test its systems using synthetic, non-personal data.
- Circuit breakers: build in mechanisms throughout the process capable of automatically stopping an agent’s execution if anomalous activity is detected.
- Agent calibration and alignment controls: consistent with the idea of repeatability, organisations should ensure that system changes do not alter agent behaviour in a way that misaligns it with the original purposes and policies governing its operation.
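As an illustration of the circuit-breaker idea, a minimal sketch might look like the following. The anomaly signal used here, a tool-call budget for a single run, is our own simplifying assumption; in practice the trigger would be whatever the organisation defines as anomalous activity:

```python
class CircuitOpen(Exception):
    """Raised when the agent's execution is halted for review."""

class CircuitBreaker:
    def __init__(self, max_tool_calls: int) -> None:
        self.max_tool_calls = max_tool_calls
        self.calls = 0
        self.open = False

    def before_tool_call(self) -> None:
        """Invoked before every tool call; halts the agent once tripped."""
        if self.open:
            raise CircuitOpen("agent halted: awaiting human review")
        self.calls += 1
        if self.calls > self.max_tool_calls:
            self.open = True  # stop automatically; require intervention
            raise CircuitOpen("anomalous activity: tool-call budget exceeded")

breaker = CircuitBreaker(max_tool_calls=3)
for step in range(5):
    try:
        breaker.before_tool_call()
    except CircuitOpen as err:
        print(f"step {step}: {err}")
        break
```

Once tripped, the breaker stays open until a human resets it, which reflects the AEPD's emphasis on automatic stopping rather than mere alerting.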
Organisations should also consider whether the introduction of the agent requires a data protection impact assessment to be carried out or updated.
Osborne Clarke comment
Deploying agentic AI without first redesigning the relevant processing activity, strengthening traceability, and clearly defining tools, memories, degrees of autonomy and responsibilities materially increases the risk of non-compliance. These projects should therefore be approached as coordinated compliance redesign exercises, rather than as merely technological decisions.