
EU MDR and IVDR poised to remain main framework for medical AI in draft AI interface reforms

Published on 28th January 2026

Lex specialis treatment, integrated notified body reviews and clarified duties are designed to limit overlapping regulation of medical AI


At a glance

  • Proposed reforms position the MDR and IVDR as the primary framework for medical AI, reducing the risk of duplicative conformity assessments under the AI Act.

  • Notified bodies assessing AI-enabled devices will require AI-specific competence, with integrated reviews replacing separate certification processes.

  • Clarified lex specialis treatment aims to streamline compliance while maintaining AI-specific requirements for data governance, robustness and human oversight. 

The EU Artificial Intelligence Act (AI Act) establishes horizontal rules for high-risk AI systems, including those used in healthcare. Medical device and diagnostic manufacturers have been concerned that AI-enabled devices could be subject to duplicative obligations under both the AI Act and the regulatory framework of the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR).

The European Commission's legislative proposal to revise both regulations, released on 16 December, directly addresses this concern by clarifying the lex specialis relationship between the two regimes, adjusting the AI Act's annex I (list of Union harmonisation legislation) to reposition the MDR and IVDR references, and ensuring that AI-specific requirements are implemented through the sectoral conformity assessment system rather than through separate certification.

AI Act adjustments

Under the AI Act's current version, which has been gradually applicable since 1 August 2024, high-risk AI systems are subject to a specific conformity assessment, leading to the CE marking of conformity, and to various oversight provisions. A system is classified as "high-risk" where:

  • The AI system is intended to be used as a safety component of a product covered by EU harmonisation legislation listed in the AI Act's aforementioned annex I, or is itself such a product; and
  • That product is required to undergo third-party conformity assessment for placing on the market or putting into service under that legislation.

The Commission's proposal makes a small but impactful adjustment to the AI Act by moving the references to the MDR and IVDR from annex I's section A (list of Union harmonisation legislation based on the New Legislative Framework) to its section B (list of other Union harmonisation legislation). If the amendment is adopted, the MDR and IVDR would function as the primary legal framework for AI systems that are safety components of devices and diagnostics, or that are themselves such products.

Implications of proposed system

High-risk AI systems covered by the EU legislation listed in section B are subject to a light-touch regulatory regime, set out in article 2(2) of the AI Act. The provision imposes only a very limited set of requirements with which such systems must comply. The rationale for this approach is that, if the proposal is passed, the MDR and IVDR framework – as amended – would offer sufficient guarantees of conformity for placing high-risk medical AI systems on the market.

The aim is that parallel application of the MDR and IVDR regime and the AI Act does not lead to overlap that stifles innovation, and that the regulatory framework for AI-enabled devices is simplified. However, the proposed system comes with a few caveats.

First, the MDR-IVDR proposal would legally empower the Commission to use its implementing and delegated powers under the regulations to lay down specific requirements regarding AI. If adopted, those Commission-enacted requirements would have to take into account some of the AI Act's specific provisions for high-risk AI systems.

Second, notified bodies designated to assess high-risk AI systems falling under the MDR or the IVDR would also need to meet the specific AI-related requirements set out in the AI Act's article 31 (requirements relating to notified bodies).

Third, the proposal places an explicit duty upon MDR and IVDR competent authorities to cooperate with AI Act-designated market surveillance authorities of their Member State, with respect to all devices that qualify as high-risk AI systems.

Towards a single assessment

For manufacturers, a benefit of the clarified interface is that a single conformity assessment under the MDR or IVDR could, in principle, suffice to demonstrate compliance with both device regulations and AI Act requirements for medical AI systems, provided that the technical documentation and quality management system adequately address AI risks.

Practically, this could mean that the regulations' technical documentation may need to include AI-specific elements, such as descriptions of model architecture and training procedures, data governance policies, robustness and stress-testing outcomes, and post-market monitoring plans tailored to the AI system. However, as these elements are not expressly prescribed in the current proposal, their precise scope and content would depend on any future Commission acts adopted under the amended regulations.

Manufacturers may also be expected to document how human oversight is built into their AI devices, consistent with AI Act requirements. For example, (digital) instructions for use and training materials might have to be particularly clear as to the intended role of clinicians in interpreting AI outputs and the system's limitations. The extent of such documentation requirements would ultimately be shaped by the final legislative text and any implementing or delegated acts the Commission may adopt.

Notified body designation

The AI Act envisages that certain notified bodies may be designated to perform conformity assessments for high-risk AI systems in accordance with its article 31. In the devices and IVDs context, the MDR and IVDR remain the designation basis, but notified bodies certifying AI-enabled devices will need to demonstrate competence in assessing AI-related aspects.

The Commission's proposal does not spell out specific AI competence criteria. However, the combination of article 35 (authorities responsible for notified bodies) and annex VII (requirements to be met by notified bodies) in both the MDR and IVDR with the AI Act's article 31 implies that authorities responsible for notified bodies may have to ensure that their own staff or contractors have adequate expertise in software, AI, data science and cybersecurity.

The joint assessment teams introduced by the proposal, which would comprise representatives of a national authority and two experts appointed by the Commission in consultation with the Medical Device Coordination Group, will likely include AI experts when evaluating notified bodies' capabilities in this area.

Complementary nature of AI Act obligations 

While the MDR and IVDR would become the primary framework, certain obligations under the regulations and their draft amendments remain of key interest to AI manufacturers, particularly regarding transparency, human oversight, robustness, data governance and quality management.

For example, the enhanced cybersecurity requirements in the amended regulations' annex I (general safety and performance requirements) for software and connected devices implicitly address some of the AI Act's concerns about resilience against manipulation and unauthorised access. For AI-enabled devices, these annex I cybersecurity requirements provide standards for security and resilience, given that adversarial attacks or data poisoning can compromise AI performance and patient safety.

The proposal further strengthens these obligations through new reporting requirements under articles 87a (MDR) and 82a (IVDR) for actively exploited vulnerabilities and severe incidents affecting connected devices. The proposal requires manufacturers to integrate cyber risk management into their (AI system) design, ensuring that training and validation data are protected, model parameters are secured, and update mechanisms cannot be easily subverted.

The MDR and IVDR requirements on clinical evaluation, performance evaluation and post-market follow-up require manufacturers to define and justify the level of clinical evidence and to continuously update their evaluations based on new data. For AI-enabled devices, manufacturers would typically be encouraged to pay special attention to the composition and governance of the datasets underpinning clinical evidence, including bias, coverage of relevant subpopulations and ongoing performance monitoring under real-world conditions.

Distinguishing product categories

The lex specialis arrangement does not mean that all AI used in healthcare falls under the MDR or IVDR. 

Standalone AI systems that do not meet the definition of a medical device or IVD, but are used in health-related contexts – for example, AI chatbots providing lifestyle advice, or hospital capacity planning tools – will remain subject primarily to the AI Act. It is the manufacturer's responsibility to determine which regulation applies to its products and, as far as the AI Act is concerned, to document its assessment of whether the technology falls outside the 'high-risk' classification.

The regulatory status of borderline products can be clarified via the proposal's codified Helsinki procedure, with expert panel opinions and, if necessary, Commission implementing acts determining whether a given AI product is a device or IVD and how it should be classified.

Osborne Clarke comment

The proposed clarification of the MDR-IVDR and AI Act interface could represent a significant step towards greater legal certainty for medical AI developers, if adopted in its current form. Confirming the MDR and IVDR as lex specialis would reduce the risk of overlapping audits and potentially conflicting requirements, while the integration of AI-related expectations into existing sectoral processes could leverage the strengths of the device regulatory model.

However, the practical effect will depend heavily on how notified bodies – and the Commission in upcoming legislative acts – interpret and apply these integrated requirements. Manufacturers may wish to anticipate that AI-related scrutiny could intensify and may consider proactively enhancing their internal governance of AI system development, data use and post-market monitoring. Early, structured engagement with notified bodies on AI-specific documentation could help align expectations during the assessment process, though the precise scope of such requirements will depend on the final legislative text.

Maintaining flexibility in technical and regulatory strategies, and monitoring EU level developments closely, may be advisable for companies seeking to maintain compliance while leveraging the full potential of AI in their digital health technologies. As the proposal remains subject to negotiation between the European Parliament and the Council, stakeholders should avoid drawing definitive conclusions until the legislative process is complete.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
