European Commission consultation shapes high-risk AI classification for life sciences in the EU
Published on 19 June 2025
Stakeholder input will influence guidelines for AI Act implementation in healthcare and medical technologies

As the countdown to the EU AI Act’s high-risk obligations continues, the European Commission has launched a consultation that could steer regulatory refinements for the life sciences sector.
Running from 6 June to 18 July 2025, this targeted initiative seeks to clarify how AI systems will be classified as “high risk” in practice. The results will directly inform forthcoming Commission guidelines – expected by February 2026 – offering vital clarity for pharma, medtech and digital health organisations deploying or providing high-risk AI ahead of the August 2026 deadline.
The AI Act’s high-risk approach
The AI Act, in effect since 1 August 2024, establishes a harmonised legal framework for AI across the EU. Its central aim is to foster trustworthy, human-centric AI while safeguarding health, safety and fundamental rights. The AI Act employs a risk-based classification, with the most stringent requirements reserved for high-risk AI systems. These are defined either as safety components of products subject to EU harmonisation legislation listed in annex I – such as the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR) – or as standalone systems deployed in high-risk areas detailed in annex III, including various healthcare applications.
The consultation document confirms that the Commission is required to issue guidelines on the practical implementation of high-risk classifications by 2 February 2026, including a comprehensive list of use cases that are and are not high risk. This is particularly relevant for life sciences, where the distinction between embedded AI (for example, in a medical device) and standalone AI (such as clinical decision support software) can determine the applicable regulatory pathway, which in turn may affect launch or deployment timelines.
High-risk AI in healthcare and life sciences
The relevance of high-risk AI systems in healthcare is underscored by the AI Act’s explicit inclusion of medical devices and IVDs in annex I, and of healthcare use cases in annex III. The legislation recognises AI’s transformative potential in diagnostics, treatment planning, patient monitoring and research, but also highlights the risks to patient safety and fundamental rights if such systems malfunction or are misused.
For example, an AI-powered diagnostic tool embedded in an MRI scanner is likely to be classified as high-risk under article 6(1), as it functions as a safety component of a regulated medical device. Similarly, a standalone AI application that supports clinical decision-making or manages patient data may fall under article 6(2) and annex III, depending on its intended purpose and its impact on health outcomes.
The AI Act also provides for exemptions under article 6(3), where an AI system does not materially influence the outcome of decision-making or only performs a narrow procedural or preparatory task, but these exemptions are narrowly construed and reliance on them must be carefully documented and registered.
The consultation seeks practical examples from stakeholders to clarify these boundaries, particularly for systems that may straddle the embedded and standalone categories or whose risk profile is ambiguous. This is crucial for companies developing or deploying AI tools for clinical trials, patient management or laboratory automation, as misclassification could lead to unnecessary compliance burdens or regulatory gaps.
Key points from the consultation
The consultation is structured to gather stakeholder views and practical examples across five sections:
1. classification rules for high-risk AI under article 6(1) and annex I;
2. classification under article 6(2) and annex III;
3. general questions on high-risk classification;
4. requirements and obligations for high-risk AI systems and value chain actors; and
5. potential amendments to the list of high-risk use cases and prohibited practices.
Several points are particularly salient for life sciences.
The definition of “safety component” is explored in detail, with the consultation seeking input on whether AI systems used for maintenance monitoring, or for the prevention or mitigation of harm, should be included. This could affect, for instance, IVD manufacturers whose AI systems monitor equipment performance or predict failures.
The interplay between high-risk AI requirements and other EU legislation, such as the MDR and IVDR – both currently the subject of a targeted evaluation by the Commission – is also a recurring theme.
The consultation further recognises the need for clarity on how data governance, conformity assessment and post-market surveillance obligations will be coordinated, particularly where AI systems are already subject to sectoral regulation.
The consultation also addresses the responsibilities of providers and deployers along the AI value chain, including the concept of “substantial modification” and the allocation of obligations when AI systems are updated, integrated or repurposed after initial market placement. This is particularly pertinent for digital health companies, given how continually software and AI integrations evolve in their field.
Broader significance
The consultation is a key mechanism through which the Commission will fulfil its obligations under the AI Act to provide detailed, practical guidelines for high-risk AI classification and compliance. These guidelines are expected to shape how national authorities and notified bodies interpret and enforce the AI Act’s requirements.
The AI Act’s risk-based approach is designed to be proportionate and flexible, but its effectiveness depends on clear, workable definitions and consistent application across sectors and Member States. The consultation is therefore central to ensuring that the AI Act’s objectives are achieved in practice, especially in complex and high-stakes fields such as pharma and medtech.
It is also notable that the Commission is required to review and, if necessary, amend the list of high-risk use cases in annex III and the list of prohibited practices in article 5 on an annual basis, ensuring that the regulatory framework remains responsive to technological and societal developments.
Osborne Clarke comment
The Commission’s targeted stakeholder consultation offers life sciences organisations an opportunity to contribute to the development of guidelines that will determine how AI systems are classified and regulated under the AI Act.
Responding to the consultation is voluntary, but engagement can help businesses anticipate regulatory developments and support a proportionate, innovation-friendly compliance environment.
Companies operating in the biotech, medtech and digital health sectors may wish to review their AI portfolios against the AI Act’s classification criteria, and consider submitting practical examples or requests for clarification where the risk status is unclear or where existing sectoral regulation already provides robust safeguards.
Businesses may find it helpful to monitor the consultation’s progress and prepare for the publication of the Commission’s guidelines in early 2026, as these will set the compliance expectations ahead of the AI Act’s high-risk obligations entering into force in August 2026.