Digital Omnibus Package: What would be the impact for the legal bases for personal data processing?
Published on 18th February 2026
This article was authored by Julia Kaufmann and Dr. Florian Eisenmenger.
On 19 November 2025, the European Commission (“Commission”) published two Digital Omnibus proposals (“Digital Omnibus Package”) aimed at "simplifying, clarifying and improving the EU's digital rulebook". The Commission's vision focuses on adapting the EU's regulatory framework to a more volatile and competitive world.
The first proposal, simply labelled as the Digital Omnibus, relates to the General Data Protection Regulation (“GDPR”), among other laws, while the second, labelled the Digital Omnibus on AI, relates to the EU AI Act (“AI Act”).
Expanding on our general overview of the Digital Omnibus Package available on our website, we are now diving deeper into the impact the Digital Omnibus Package would have on the legal bases for the processing of personal data.
Supporting AI innovation under the GDPR: Legitimate Interests, Sensitive Data, and Bias
Among the proposed amendments are welcome clarifications of, and supplementations to, the legal bases for processing of personal data, particularly in the context of AI development and operation. While the proposals do not constitute a fundamental policy reversal, they would provide more legal certainty in areas where current rules have proven difficult to apply in practice.
The legal bases addressed by the Digital Omnibus Package relate to the following aspects:
- clarification of the legitimate interest legal basis in the case of AI development and operation;
- new specific permission ground under Art. 9 GDPR for the processing of sensitive data for AI development and operation;
- new legal basis for processing biometric data for identity verification purposes; and
- expansion of the permission ground for the processing of sensitive data for AI bias detection.
Legitimate Interests for AI Development and Operation in Art. 88c GDPR (new)
In practice, reliance on Art. 6 (1)(f) GDPR in the context of AI development and operation often proves challenging, even though EU and various Member State supervisory authorities have acknowledged AI development and operation as a legitimate interest. In particular, the European Data Protection Board’s Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models offers concrete guidance on carrying out a legitimate interest assessment in the development and deployment of AI models. The Discussion paper on legal bases for the use of Artificial Intelligence published by the German State supervisory authority of Baden-Württemberg contains similar guidance (German language only).
The proposed Art. 88c GDPR (new) would provide that the processing of personal data in the context of the development and operation of an AI system or AI model (each as defined by the EU AI Act) qualifies as a legitimate interest of the controller within the meaning of Art. 6 (1)(f) GDPR, provided that (i) such processing is necessary for such legitimate interest of the controller and (ii) such legitimate interest is not overridden by the interests of the data subject. Art. 88c GDPR (new) further clarifies that Art. 6 (1)(f) GDPR may only be considered a legal basis in the first place if EU or Member State law does not specifically require the data subject’s consent for processing personal data or otherwise prohibit such processing activity.
Most importantly, Art. 88c GDPR (new) would not prescribe a statutorily manifested overriding legitimate interest of the controller, nor would it pre-determine the outcome of any balancing of interests test in this context. Rather, the principles of Art. 6 (1)(f) GDPR would remain unchanged. Hence, Art. 88c GDPR (new) would only help establish a legitimate interest of the controller in the first step of the three-tiered legitimate interest assessment. The controller would still be required to demonstrate, based on the facts of the individual case, (1) the necessity of the processing of personal data for AI development and operation and (2) that the balancing of the opposing interests of controller and data subjects leads to an overriding legitimate interest of the controller.
Art. 88c GDPR (new) also stipulates specific aspects of technical and organisational measures and other safeguards that will need to be taken into account as part of the balancing of interests test. Those specific aspects are:
- data minimisation measures when selecting data sources and during the training and testing of an AI system or AI model;
- measures to protect residually retained data in the AI system or AI model against disclosure and to ensure enhanced transparency to data subjects; and
- measures to ensure that data subjects are provided with an unconditional right to object to the processing of their personal data.
Recital 31 elaborates further that the balancing of interests test as part of the legal basis of Art. 6 (1)(f) GDPR in this context shall take into consideration:
- whether the interest pursued by the controller is beneficial for the data subject and society at large, e.g., for detecting and removing bias or for ensuring accurate and safe outputs;
- the reasonable expectations of the data subjects based on their relationship with the controller;
- appropriate safeguards to minimise the impact on data subjects’ rights, such as providing enhanced transparency to data subjects;
- an unconditional right for data subjects to object to the processing of their personal data;
- respect for technical indications embedded in a service limiting the use of data for AI development by third parties;
- the use of other state-of-the-art privacy-preserving techniques for AI training; and
- appropriate technical measures to effectively minimise risks resulting, for example, from regurgitation, data leakage and other intended or foreseeable actions.
The unconditional right to object to any AI-related processing of personal data would exceed the scope of the statutory right to object pursuant to Art. 21 GDPR since Art. 21 GDPR requires the data subject to assert grounds for the objection relating to the data subject’s particular situation and provides for an exception if the controller can demonstrate compelling legitimate grounds.
From a practical perspective, it will be challenging for companies to provide an unconditional right to object to the processing of personal data in an AI system or AI model. For example, once chatbots and AI agents become standard work tools, similar to business email and instant messaging today, companies will be unable to exclude the personal data of individual employees who have objected from such processing.
Processing Sensitive Data in the AI Context (Art. 9(2)(k) and Art. 9(5) GDPR)
The development of certain AI systems and AI models may involve the processing of large amounts of data, including sensitive data within the meaning of Art. 9 GDPR. Pursuant to recital 33 of the Digital Omnibus, such sensitive data may exist in the training, testing or validation data sets or be retained in the AI system or the AI model, although such sensitive data is not necessary for the purpose of the processing. In order not to disproportionately hinder the development and operation of AI, the Digital Omnibus proposes a new permission ground in Art. 9(2)(k) GDPR (new) for the processing of sensitive data.
Certain supervisory authorities have already taken the position that the use of AI can, for example, be considered necessary for the provision of health care services and thus be permitted by Art. 9(2)(h) GDPR. Some Member State lawmakers, too, have already sought to introduce national provisions allowing the processing of health data for AI-based innovation, and the Regulation on the European Health Data Space will also provide a legal regime for processing health data for such AI-based innovation, subject to strict limitations. However, the strict regime of Art. 9(2) GDPR is still often regarded as insufficient and as an obstacle to the development and operation of AI with sensitive data. Especially where companies consider explicit consent of data subjects (Art. 9(2)(a) GDPR) the sole reliable and risk-appropriate permission ground, the actual implementation of this permission ground is often difficult, and at times even impossible.
Art. 9(2)(k) GDPR (new) seeks to provide a specific permission ground for the processing of sensitive data in the context of development and operation of AI systems or AI models. Further conditions of this permission ground shall be set out in Art. 9(5) GDPR (new). Accordingly, the controller must demonstrate that (i) it was not possible to avoid the collection and subsequent processing of sensitive data in the data sets used for training, testing or validation through appropriate technical and organisational measures, (ii) it was not possible – during the entire lifecycle of an AI system or AI model – to identify and remove any sensitive data from such data sets with appropriate measures and proportionate effort, and (iii) measures are applied to effectively protect the residual sensitive data from being used to produce outputs, being disclosed or otherwise made available to third parties. According to the recitals, these conditions include taking appropriate and proportionate measures to avoid the processing of sensitive data that may be residually retained in the AI model.
Only if the removal of such data requires a disproportionate effort is the controller not required to remove it. Such a disproportionate effort could, for example, be assumed if the removal of the data in question would require the re-engineering of the AI system or AI model (see recital 33 of the Digital Omnibus). In such a case, however, the controller must, without undue delay, effectively protect such data against being used to produce outputs, being disclosed or otherwise made available to third parties.
Overall, Art. 9(2)(k) GDPR (new) offers a welcome expansion of the legal grounds for processing sensitive personal data. However, it is currently unclear whether this permission ground would apply merely to inadvertent and residual processing of sensitive personal data in the AI context or whether a company could also rely on Art. 9(2)(k) GDPR (new) where sensitive data are a core aspect of the processing activities, e.g., because an AI model is to be developed for the health care sector using health data.
Processing Biometric Data for Identity Verification (Art. 9(2)(l) GDPR)
The proposed Art. 9(2)(l) GDPR (new) is a rather minor amendment compared to the other amendments proposed under the Digital Omnibus. It would allow the processing of biometric data if (i) it is necessary for confirming a data subject's identity and (ii) the biometric data or the means needed for verification are under the data subject’s sole control.
The Commission does not provide much background for this proposal, and its practical impact remains to be seen. However, it may help controllers carry out identity checks with less administrative and compliance burdens, for example in scenarios where data subjects confirm their identity remotely via their smartphone camera.
Expanded Data Processing for Bias Detection (Art. 4a AI Act)
Processing sensitive personal data in an AI context may pose significant privacy risks, but AI systems trained with certain sensitive personal data may in turn also protect natural persons from adverse effects of biases, such as discrimination. For this reason, Art. 10 (5) of the AI Act permits providers of high-risk AI systems to process sensitive data for the purpose of bias detection and correction as a matter of substantial public interest within the meaning of Art. 9 (2)(g) GDPR. This follows the rationale that sensitive personal data, such as ethnic origin, genetic data or health data, pose a significant risk for an AI system to generate biased output thereby producing harmful results. To ensure that such bias can be detected and corrected at the outset, it is necessary to process sensitive personal data for these purposes.
The risk of biased output is, however, not limited to high-risk AI systems; it may concern any AI system and AI model. Given the increasingly widespread use of AI, the Digital Omnibus on AI proposes that the currently existing exception in Art. 10 (5) AI Act be expanded to apply not only to high-risk AI systems but also to any other AI system and AI model subject to the AI Act. To this end, Art. 10 (5) AI Act shall be transposed into a new Art. 4a AI Act (new) without significant modifications, except for the expansion of applicability to any AI system and AI model under the AI Act.
The safeguards currently required by Art. 10 (5) AI Act shall be retained by transposing them into Art. 4a AI Act (new), in particular:
- bias detection and correction cannot be effectively fulfilled by processing other data, such as anonymised or synthetic data;
- the sensitive personal data used for these purposes are not transmitted, transferred or otherwise accessed by other parties; and
- the controller documents in its records of processing activities the reasons why it was necessary to process these data for bias detection and correction purposes and why this could not have been achieved with other data.
The GDPR's general safeguards for processing sensitive personal data (such as strict access rights, data minimisation and additional security measures) would also still apply. While the proposed amendment does not reduce the administrative and compliance burden of processing sensitive personal data for bias detection and correction purposes, it provides legal certainty for all developers of AI systems and AI models instead of only developers of high-risk AI systems. This highlights the importance of bias detection and mitigation as critical aspects of responsible AI development practices.
Conclusion
The Digital Omnibus Package is currently still subject to EU legislative procedure (see our “Speculative Timeline for AI Omnibus”).
The EDPB and the EDPS have also issued a joint opinion on these aspects of the legal bases under the GDPR.
Further discussions among the EU legislative bodies will be necessary to agree on a compromise version of the Digital Omnibus Package that will become law.