Artificial intelligence

New AI legislation's reach extends into European healthcare

Published on 24th Apr 2024

What impact could the sector-neutral EU AI Act have on public health and patient care across Europe?


Compliance with Europe's upcoming regulation laying down harmonised rules on artificial intelligence (AI) is essential for life sciences and healthcare companies relying on the rapidly developing transformative technology.

The EU AI Act, which gained approval from the European Parliament in March and awaits final approval from the Council of the EU, is set to have a significant impact on life sciences and healthcare companies as they use AI across production and pre-launch tasks, clinical studies, post-marketing activities and day-to-day operations such as recruitment, procurement and legal affairs.

Citizens' health

The regulation is enacted on the premise that AI systems can potentially pose risks to the health and safety of individuals, including patients and their healthcare professionals – citizens' health and safety are cornerstones of the EU AI Act.

The new AI regulation considers it crucial to prevent and mitigate safety risks posed by a product's digital components, including AI systems. This is in line with Union harmonisation legislation that aims to enable product movement in the internal market and to ensure that only safe and compliant products enter the market.

Autonomous robots in manufacturing and personal care, along with advanced diagnostic systems in healthcare, must adhere to the principles of the regulation and operate safely and reliably in complex environments, ensuring safety and accuracy in critical tasks.

The extent of a potential adverse impact caused by an AI system on health, safety and fundamental rights is of particular relevance when the system is classified as "high risk". High-risk AI systems either meet general, product-related criteria or are specifically identified in the regulation.

The European Commission is empowered to adopt delegated acts adding use-cases to, or modifying use-cases in, the regulation's list of high-risk AI systems. As part of this process, an assessment is made of the severity of the harm that an AI system can cause to the health and safety of persons or to their fundamental rights.

Conversely, the EU AI Act allows that certain AI systems are not regarded as high risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of persons and meet a number of conditions; for example, they are solely intended to perform a narrow procedural task.

Pivotal health and safety

Beyond their relevance to classify AI as high-risk, the health and safety of citizens are pivotal to multiple regulatory obligations under the EU AI Act.

For example, any high-risk AI system's risk management system must be based on an identification and analysis of known and reasonably foreseeable risks the system can pose to health, safety or fundamental rights when used in accordance with its intended purpose.

Instructions for use of high-risk AI systems should cover potential risks to health, safety, and fundamental rights arising from foreseeable uses – that is, in line with the system's intended purpose – and reasonably foreseeable misuses.

Persons affected by decisions made with certain high-risk AI systems have the right to receive clear explanations regarding how the AI system contributed to the decision made by the system's deployer and its potential adverse impacts on their health, safety, or fundamental rights.

Medical care

The EU AI Act delineates a distinct boundary between its regulatory scope and AI-driven medical practices within Member States, with the intent of ensuring that the regulation does not impede treatments at the national level.

Certain AI systems that materially distort human behaviour, causing significant harms such as adverse impacts on physical or psychological health, or on financial interests, are prohibited practices under the EU AI Act. The regulation is, however, clear that the ban on manipulative and exploitative practices should not affect lawful practices, such as those in the field of healthcare.

Examples include medical treatment, such as psychological treatment of a mental disease or physical rehabilitation, so long as those practices are carried out in accordance with the applicable law and medical standards; for example, seeking an individual's consent or that of their legal representatives.

In the same vein, the placing on the market, putting into service or use of AI systems intended to detect the emotional state of individuals in situations related to the workplace and education is also forbidden. The scientific validity of these systems is hindered by cultural and situational variations, which lead to unreliable outcomes.

The use of biometric data for interpretations also raises concerns around discrimination and rights infringement, notably in work or educational settings where power imbalances may exacerbate biased treatment. However, the regulation's ban does not cover AI systems placed on the market strictly for medical or safety reasons, such as systems intended for therapeutical use.

AI for all: D&I

Inclusive and diverse design and development of AI systems, as well as their use and access, is another cornerstone of the regulation. Attention to vulnerable persons and accessibility to persons with disability are essential aspects of the EU AI Act.

The new regulation makes it a priority to ensure full and equal access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It embeds this as a legal requirement for providers of high-risk AI systems. Compliance with accessibility requirements should occur by design to ensure necessary measures are integrated as much as possible from the outset. The regulation refers to Directive (EU) 2016/2102 and Directive (EU) 2019/882.

The EU AI Act further acknowledges and follows the ethics guidelines for trustworthy AI developed by the independent high-level expert group on AI appointed by the European Commission. The ethical principles identified by the expert group are intended to help ensure that the technology is trustworthy and ethically sound.

Among guidelines criteria, diversity, non-discrimination and fairness are essential. This means that AI systems should be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by national or EU laws. Another key criterion is geared towards assessing and preventing the negative impact of AI on vulnerable persons, including as regards accessibility for persons with a disability, and gender equality.

The diversity of AI development teams, including gender balance, is also essential. Obligations are placed upon the European Artificial Intelligence Office and Member States to ensure diversity and inclusion (D&I) aims are met. They are in charge of facilitating the drawing up of codes of conduct concerning the voluntary application of requirements to all AI systems. Among drafting criteria for voluntary codes, inclusive and diverse design of AI systems is also vital. This can be achieved by establishing diverse and inclusive development teams and promoting stakeholders’ participation in that process.

The placing on the market, putting into service or use of an AI system that exploits vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation is a forbidden practice. The ban applies where the objective or the effect pursued is to materially distort the behaviour of that person (or a person belonging to that group) in a manner that causes, or is reasonably likely to cause, significant harm to that person or another.

Osborne Clarke comment

The EU AI Act is a cross-sector piece of legislation, but its impact on the healthcare industry is without precedent.

Adverse effects on citizens' health and their safety are carefully considered under the new law. Multiple regulatory requirements are crafted around these considerations to ensure users – including patients and healthcare professionals – are duly informed and protected. Criteria for forbidden AI practices are crafted with a specific attention to therapeutic and medicinal care, to ensure patients can continue to be treated in line with appropriate medical standards. D&I, including as regards persons with disabilities, is fundamental to the regulation.

Our Osborne Clarke series on AI is focused on the effect of the EU AI Act on the healthcare and life sciences industries. In the next piece, we will consider the regulation's risk-based approach, with a focus on (bio)pharmaceuticals, medical devices and in vitro diagnostics.

The series over the coming months will also cover essential AI applications in life sciences, the regulation's extension into healthcare, AI supply chains, the complexities faced by manufacturers involved in AI, and more.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
