
EU to finalise the world's first law on artificial intelligence

Published on 27th Nov 2023

By the end of the year, a new regulation is expected to be adopted that will classify artificial intelligence systems according to the risk they pose to humans


In April 2021, the European Commission significantly shifted its approach to AI regulation. Instead of relying on "soft" law, such as non-binding guidelines and other recommendations, it proposed creating a comprehensive regulatory framework for AI. 

Two years later, the world will witness the emergence of the first-ever legal framework for artificial intelligence. The European Parliament has announced that a new regulation is set to be adopted by the end of this year. The regulation, which aims to harmonise the legal framework for developing, deploying and using AI products and services across the European Union, will impose varying obligations on providers and users of AI systems.

Aims of the regulation

The new EU regulation will significantly affect companies that provide artificial intelligence systems in the EU, as well as those located in a non-EU country that distribute or use these systems within the EU.

The regulation aims to ensure that AI systems used in the EU are trustworthy, transparent, traceable, free from bias and environmentally friendly.

Risk segmentation 

The European regulation is primarily concerned with assessing the level of risk that an artificial intelligence system may pose to humans. Based on this risk, the systems will be categorised, and their providers and even users will be obliged to adhere to regulations. 

Systems deemed to pose an unacceptable risk to humans will be prohibited outright. For instance, a voice-activated toy that manipulates vulnerable individuals or groups and encourages them to engage in dangerous activities would be banned. Similarly, social-scoring systems that rate and categorise individuals according to their socio-economic status or personal characteristics will be prohibited.

The remaining systems will be classified into three categories based on their risk level: high risk, limited risk and minimal risk.

Systems that can potentially affect an individual's safety or fundamental rights are classified as high risk. These are further divided into two sub-categories. The first includes AI systems used in products governed by EU product safety legislation, such as medical devices and toys. The second comprises eight specific areas, including AI systems used in law enforcement and in the interpretation and application of the law.

Companies that provide high-risk artificial intelligence systems will have to register their systems in an EU database before they can be deployed. In addition, providers and users of these systems will be required to comply with various obligations relating to risk management, technical robustness, transparency, data governance, cybersecurity and other related areas.

Generative artificial intelligence systems, such as ChatGPT, and systems that create or manipulate image, audio or video content, will be classified as limited risk. To ensure transparency, these systems must comply with specific obligations, including disclosing when content has been generated by AI, implementing measures to prevent the generation of illegal content and publishing summaries of the copyrighted data used to train the system.

Systems with minimal or low risk will not be subject to additional obligations. Nonetheless, providers of such systems will be encouraged, through codes of conduct, to voluntarily apply the requirements imposed on high-risk systems.

Supervision and enforcement: fines of up to €30 million

Every Member State will be required to appoint a competent authority, which will include a national supervisory authority, to regulate the use of artificial intelligence. Additionally, a European Artificial Intelligence Board of representatives from all twenty-seven countries and the Commission will be established. 

Operators will be able to seek guidance from the national authorities to ensure that they meet their obligations regarding artificial intelligence systems. The national authorities will also be responsible for ordering corrective measures to restrict, prohibit, withdraw or recall AI systems that may endanger the health or safety of individuals or their fundamental rights, even where such systems comply with existing laws. To ensure compliance, administrative fines of up to €30 million or 6% of annual worldwide turnover, whichever is greater, may be imposed on the entity found to be in violation.

Osborne Clarke comment

The European Union has taken a leading role in regulating artificial intelligence. The proposed regulatory framework, expected to be adopted by the end of this year, aims to balance the advancement of AI technology and innovation with the protection of fundamental rights and citizen security.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
