Artificial intelligence

High-risk AI systems in life sciences face strict regulatory scrutiny from new EU rules

Published on 2nd May 2024

Businesses should consider if and how AI is used internally or externally and get ready for the risk-based approach

The risk categorisation of artificial intelligence (AI) systems is intended to ensure that their regulation is proportionate and effective: the intensity and scope of the risks an AI system can generate determine the type and content of the applicable rules.

AI practices that pose unacceptable risks are banned outright under the new EU AI Act, which was approved by the European Parliament in March, awaits the Council of the EU's final approval and marks a significant legislative development for the life sciences and healthcare industry.

As an example of unacceptable risks, some AI systems are intended to evaluate or classify individuals or groups over a period of time based on their social behaviour or on known, inferred or predicted personal or personality characteristics. These systems constitute prohibited AI practices and are banned where the resulting social score leads to detrimental or unfavourable treatment.

Regulatory responses

High risks do not preclude an AI system being marketed in the EU, but they do trigger a strict regulatory regime. All operators throughout the supply chain fall within this regulatory scope: providers, product manufacturers, deployers, authorised representatives, importers and distributors.

All other AI systems – that is, those that are not deemed unacceptable and do not meet the high-risk threshold – fall into a third, catch-all category. This final group of AI systems is extensive in scope. It is also subject to regulation under the EU AI Act, but the approach applied typically depends on the AI system's intended purpose. Although these systems are occasionally described as "lower risk" AI, the regulation does not explicitly define or discuss concepts such as low or medium risk.

General-purpose AI models are classified separately under the regulation. A key criterion is whether the model poses a systemic risk, such as serious consequences to public health and safety.

'High-risk' trigger

AI systems qualify as high risk in two situations: either on account of being connected to a regulated product subject to third-party conformity assessment under European laws – for example, machinery, personal protective equipment or medical devices – or because they are explicitly listed in the EU AI Act.

In the first instance, an AI system may be considered high risk because it is already regulated as an industrial product under the EU's new legislative framework (NLF). It will also be deemed high risk if it is intended to be used solely as a safety component of an industrial product.

There are limitations to this classification. Firstly, the EU AI Act provides an exhaustive list, in annex one to the regulation, of the NLF directives and regulations that trigger this requirement. Not all NLF legal instruments are incorporated in the annex. Notably absent from the list is directive 2014/35/EU on low-voltage electrical equipment.

Secondly, in either configuration – that is, AI as a product or AI as a safety component – the regulatory requirements specific to high-risk AI systems apply only if the industrial product is required to undergo a third-party conformity assessment under the NLF. This excludes situations where Union law allows self-assessments of conformity without any recourse to an external stakeholder; for example, in the case of class I medical devices.

Risk list

In the second instance, the high-risk determination is made on the basis of a list of AI systems provided in the regulation's annex three that is aptly titled "High-risk AI systems referred to in Article 6(2)".

Nonetheless, the regulation permits providers of AI systems referred to in annex three not to follow the high-risk regulatory regime if they conduct and document an appropriate assessment. For this to occur, it must be determined that the system does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This could be the case if a provider can evidence that the AI system is intended to perform a narrow procedural task or to improve the result of a previously completed human activity.

The inventory of high-risk AI practices is not set in stone. The European Commission may add or modify the annex three use cases of high-risk AI systems via delegated acts, subject to conditions.

To assist operators, the Commission must adopt guidelines specifying the practical implementation of the classification rules within 18 months of the regulation's entry into force. Comprehensive examples of use cases of AI systems that are high risk and not high risk are expected as part of the guidance.

Pharma and medical devices

In the medical devices and in vitro diagnostics industry, any company involved in medical technologies with a digital element should consider the EU AI Act and understand its scope.

Analogue – that is, non-digital – medical devices and in vitro diagnostics are also within the regulation's purview if AI is a safety component of any of those products. For AI to be considered a product's safety component, it is sufficient that failure or malfunctioning of the AI component endangers the health and safety of persons or property – the AI component does not necessarily have to fulfil a specific safety function for the product.

An impact assessment of the regulation should be conducted even if the AI system operates remotely, without physical integration. Both embedded and non-embedded AI that serve the functionality of a medical device are subject to the regulation.

Beyond medtech, all businesses should screen their existing processes, including businesses that do not have any medical devices, in vitro diagnostics or other industrial products in their portfolio – for example, biotech or pharmaceutical companies.

The EU AI Act does not limit itself to providers or manufacturers of high-risk AI systems: users (so-called "deployers") are also in scope. A deployer can be any natural or legal person using an AI system under its authority, except when the AI system is used in the course of a personal non-professional activity.

All aspects of a life sciences business should be considered: production, conception and design; logistic processes, such as storage, transport and supply; and sales and procurement. AI can be deployed in connection with any of those operations through industrial products (such as, typically, machinery, but also pressure equipment or radio equipment) covered by the legislation listed in annex one to the regulation.

Annex-specified high risks

Additionally, annex three lists multiple concrete applications of AI systems deemed high risk that are relevant to the life sciences industry.

Examples include remote AI biometric identification systems. These are biometric access control systems commonly used in the industry to secure sensitive areas such as laboratories, manufacturing facilities and storage areas where controlled substances or valuable research materials are kept. Comparable controls are put in place within hospitals or care centres.

AI biometric categorisation systems are another example. Certain digitised clinical activities, such as automated study participant triage or selection tools, could fall into this category, as could remote patient monitoring algorithms that collect or analyse biometric data, such as heart rate, blood pressure or temperature. AI biometric categorisation systems can also assist in monitoring patient adherence to prescribed treatments.

Life sciences businesses should also inventory all general AI applications beyond health products or sector-specific workflows.

Annex three also incorporates AI systems intended to be used for the recruitment or selection of natural persons; this will impact life sciences businesses using AI, for example, to filter job applications or to evaluate candidates. Procurement, legal, finance, accounting, IT, quality, regulatory and clinical departments within pharma and medical device businesses may be deploying AI to carry out their tasks, in the form of AI systems with varying levels of risk.

Osborne Clarke comment

Digital health and medtech businesses marketing AI-powered products are not the only parts of the sector concerned by the regulation. All life sciences companies should consider whether, and how, they are using AI tools internally or externally, as the EU AI Act also applies to deployers of AI systems. Mapping AI use against the EU AI Act's requirements is a priority in order to identify the level of risk attached to each AI system – for example, high risk or not. This will determine the applicable regulatory regime.

Over the coming months, this Insight series on the EU AI Act will explore its implications for businesses active across bio-pharmaceuticals, medical devices and in vitro diagnostics. Coverage will include AI supply chains, product logistics, research and development, SMEs, compliance monitoring, liability, and more. The next Insight in the series will focus on new CE marking in healthcare.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
