The EU's AI Act: what do we know so far about the agreed text?

Published on 12th Dec 2023

While considerable work remains on its detail, there is political agreement on the broad shape of the legislation

Late on Friday 8 December, negotiators for the European Parliament and Council of the EU reached political agreement on the shape and contents of the ground-breaking EU regulation on artificial intelligence – the AI Act – after 38 hours of discussion over three days. However, this represents the beginning of the end of the legislative process, rather than the end itself.

The agreed text is not yet available and does not yet exist in its final form – a great deal of additional work is needed on the technical details over the coming weeks. Several last-minute suggestions are understood to have been made in the negotiations, increasing uncertainty about the final document. Nevertheless, having an understanding of the current position will enable businesses to kickstart their AI compliance projects.

We have gathered below what is known so far from the press releases from the Parliament and the Council, supplemented by press reports and posts by those who were in the room or close by. The overall shape of the AI Act, with its tiered, risk-based approach, has not changed from the European Commission's original April 2021 proposal, but there are some significant changes and additions, outlined below.

Definition and scope

The definition of AI in the final text is intended to distinguish AI from "simpler systems" and is said to be aligned with the Organisation for Economic Co-operation and Development (OECD) definition of AI.

This OECD definition is: "An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."

The legislation will not have an impact on Member States' competences in the area of national security and will not apply to systems that are exclusively used for defence or military applications (although dual-use technology would fall within the AI Act). It will not apply to systems used exclusively for research and innovation, or for non-professional purposes.

Prohibited AI

The tiered, risk-based approach of the proposed AI Act has been confirmed. What to include in the banned category, considered to pose unacceptable risk to safety and fundamental rights, was a particularly controversial part of the negotiations, with the Parliament wanting a much longer list of banned AI than the Council, whose constituent Member State governments wanted more freedom to use AI for national security and law enforcement applications.

Banned applications are reported to include:

  • facial recognition systems in publicly accessible spaces for law enforcement, but with exceptions (discussed below);
  • cognitive behavioural manipulation "to circumvent … free will" (as the Parliament's press release puts it);
  • emotion recognition in the workplace or educational settings (although there is apparently a carve-out for safety systems such as recognising whether a driver has fallen asleep);
  • social scoring based on behaviour or personal characteristics;
  • AI that exploits vulnerabilities such as age, disability, social or economic circumstances;
  • biometric categorisation around sensitive characteristics (including political views, sexual orientation, religious or philosophical belief and race); and
  • some types of predictive policing.

Untargeted scraping of facial images from the internet or CCTV to create facial recognition databases is also stated to be prohibited.

Facial recognition and other law enforcement applications

The compromise around facial recognition systems illustrates how agreement in areas relating to law enforcement was particularly difficult to achieve.

The use of remote biometric identification (RBI) systems – also known as automated facial recognition (AFR) systems – in publicly accessible spaces will be prohibited except with prior judicial authorisation and for specified crimes. For specific law enforcement purposes, such as preventing an imminent terrorist threat or investigating serious crimes, certain uses of RBI or AFR are likely to be permissible, some in real time and some after the event, each subject to stringent conditions.

High-risk AI

The second tier of regulation concerns AI applications that are considered to pose a high level of risk to safety or to fundamental rights. The most significant change to the Commission's original proposal is that, as agreed in earlier phases of the negotiations, there will be a carve-out for AI that falls within the specified high-risk categories but which does not, in fact, pose a significant risk to safety or fundamental rights.

At this stage there is not much detail available on the final list of high-risk AI categories, although changes are likely to concern the detail of the original Annex III proposals rather than the fundamental types of application covered. These include education, employment and recruitment, critical infrastructure, access to public and private services (the latter including credit-scoring), law enforcement, border control and the administration of justice. A proposal to include recommender systems of social media platforms deemed "systemic" under the Digital Services Act was rejected.

As originally proposed, high-risk AI applications that pose a significant risk will be subject to a raft of obligations, certification of compliance, registration and application of the CE mark, as detailed in the Commission's proposed AI regulatory framework in 2021. The final negotiations added clarifications and adjustments intended to make these obligations more technically feasible and less burdensome; the obligations around the quality of training data, and the requirements around technical documentation for small and medium-sized enterprises (SMEs), are said to have been amended in this way.

A requirement to undertake a "fundamental rights impact assessment" has been added. Limited information is available on this point, although we currently understand that it will not be a universal requirement: it is reported to apply to public bodies, and to private entities providing essential public services (such as hospitals, schools, banks and insurance companies), that deploy high-risk systems. We await more detail.

A right to lodge a complaint about an AI system has been created, as well as a right to receive an explanation of a decision taken by a high-risk AI system that affects the complainant's rights.

'General purpose AI' and foundation models

Supplementing the official press releases with reports and posts on social media – all of which carry warnings that full details are awaited – provides a sense of how the AI Act will regulate "general purpose AI".

The AI Act will regulate "general purpose AI models", which are trained on a large amount of data, are able to perform a wide range of distinct tasks and can be integrated into a variety of downstream AI. Based on leaked drafts of what appears to be the near-final text, the phrase "general purpose AI" seems to be used in place of the phrase "foundation models", but the detailed text is yet to emerge. Generative AI – such as the GPT model that ChatGPT is built on – is an example of general purpose AI.  

The AI Act will create two tiers of obligation for general purpose AI. This set of obligations is distinct from the core risk-based tiers of regulation in the AI Act. Also new is the AI Office, which will be created within the Commission to centralise oversight of general purpose AI models at EU level.

Tier one: all general purpose AI

Providers of general purpose AI models will be subject to a set of obligations, including maintaining technical documentation and providing sufficient information about their model so that downstream providers that incorporate it into their systems can comply with their own AI Act obligations. General purpose AI model providers will also be required to put in place a policy to ensure respect for EU copyright rules – in particular, to ensure that, where copyright holders have opted out of making their data available for text and data mining (including web-scraping), the opt-out is identified and respected. Finally, providers must prepare and publish a statement about the data used to train the general purpose AI model. Open-source general purpose AI is believed to be exempt from the documentation and downstream-information requirements but must still have a copyright policy and provide information about training data.

Tier two: high impact and systemic risk

A second tier of obligations, in addition to those imposed on all general purpose AI, applies to "high impact" general purpose AI models that are considered to pose a "systemic risk". The Council press release describes these as models that are "trained with a large amount of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain".

A model will be presumed to pose a systemic risk where the cumulative amount of computing power used to train it exceeds 10^25 floating point operations. This is a difficult concept to put into simple language, but it is a vast scale that will likely capture the latest iterations of generative AI models.
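
For a sense of the arithmetic, the sketch below estimates whether a hypothetical training run crosses the reported threshold. It assumes the widely used approximation that training compute is roughly 6 × parameters × training tokens for a dense transformer – a heuristic from the scaling-laws literature, not anything specified in the AI Act – and all figures in the example are hypothetical.

```python
# A rough, illustrative estimate of training compute against the reported
# 10^25 FLOP presumption threshold. The "6 * parameters * tokens" formula
# is a common heuristic from the scaling-laws literature for dense
# transformers – it is an assumption here, not anything in the AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.1e} FLOPs")                # 8.4e+23 – below the 1e25 threshold
print(presumed_systemic_risk(70e9, 2e12))  # False
```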

It appears that the AI Office will also be able to designate a general purpose AI model as posing systemic risk, based on considerations including the number of parameters in the model, the size of its training data set, characteristics of how it functions and its number of users. We understand that open-source models posing systemic risk will remain subject to the full regulatory regime.

In addition to the base-level transparency obligations, providers of general purpose AI models with systemic risk will also be required to undertake model evaluation, including adversarial testing, to assess and mitigate possible systemic risk at EU level, to monitor and report on serious incidents, to ensure adequate cybersecurity for the model and its physical infrastructure, and to monitor and report on the energy consumption of the model.

Pending the development of harmonised standards, codes of practice will be developed around these requirements; adherence to them will carry a presumption of AI Act compliance.

Open-source AI

Free and open-source AI is believed to be partly outside the scope of the regulation, although the position is somewhat unclear. It appears that aspects of the regulation may apply where open-source systems fall within the categories of high-risk or prohibited AI, or could cause manipulation. In addition, open-source systems within the category of general purpose AI are understood to fall within the scope of the AI Act, albeit with reduced obligations unless the system is considered to be high impact and to pose systemic risk.

Enforcement and penalties

Although oversight of general purpose AI will be centralised in the Commission's AI Office, it appears that enforcement of the AI Act will otherwise be undertaken by authorities designated at Member State level. Coordination and coherence of the regime will be the responsibility of an AI Board, separate from the Commission's AI Office and comprising representatives of the Member States. The AI Board will be supported by an advisory forum comprising a range of stakeholders.

Penalties for breach of the prohibitions on specified forms of AI have been increased from the Commission's proposal of a maximum of 6 per cent of worldwide annual group turnover to 7 per cent (or €35 million, if higher). It is not clear whether the maximum penalties in relation to data governance (also proposed at 6 per cent) have also been increased.

Penalties for other substantive breaches including the high-risk obligations are reported to have been reduced from a maximum of 4 to 3 per cent of worldwide annual group turnover (or €15 million, if higher). We understand that penalties for breach of the general purpose AI obligations are also set at this level.

Maximum penalties for the supply of incorrect information are reported to be 1.5 per cent of worldwide annual group turnover (or €7.5 million, if higher).

Reduced caps have been agreed for SMEs and start-ups.
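
Expressed as a calculation, each reported ceiling is the higher of a percentage of worldwide annual group turnover and a fixed euro amount. The following is a minimal, illustrative sketch of those reported figures, which may change in the final text; the reduced caps for SMEs and start-ups are not modelled.

```python
# A minimal sketch of the penalty ceilings reported from the negotiations.
# Each cap is the HIGHER of a percentage of worldwide annual group turnover
# and a fixed euro amount. Figures may change in the final text, and the
# reduced caps agreed for SMEs and start-ups are not modelled here.

PENALTY_CAPS = {
    "prohibited_ai":         (0.07, 35_000_000),  # 7% or EUR 35m
    "other_breaches":        (0.03, 15_000_000),  # 3% or EUR 15m (incl. high-risk and GPAI obligations)
    "incorrect_information": (0.015, 7_500_000),  # 1.5% or EUR 7.5m
}

def max_fine(breach: str, worldwide_turnover_eur: float) -> float:
    """Return the applicable maximum fine for a given breach category."""
    pct, fixed = PENALTY_CAPS[breach]
    return max(pct * worldwide_turnover_eur, fixed)

# Hypothetical group with EUR 2bn worldwide annual turnover:
print(max_fine("prohibited_ai", 2_000_000_000))          # 140000000.0 (7% exceeds EUR 35m)
print(max_fine("incorrect_information", 2_000_000_000))  # 30000000.0  (1.5% exceeds EUR 7.5m)
```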

Timings for compliance

A considerable amount of detailed technical work remains to be done on the draft legislation. Given the scale of this further work, plus the formal steps of adoption by the Parliament and the Council, legal checks on the drafting and translation into all of the official EU languages, we anticipate that the final legislation will be published in the EU's Official Journal in around five to six months' time.

It will become law 20 days after its publication in the Official Journal – which is likely to be summer 2024.

The provisions of the AI Act will come into force progressively over the following two years, balancing time for compliance against the political desire to address AI safety promptly. As we currently understand it (a simple date calculation is sketched after this list):

  • The prohibitions on specified categories of banned AI will come into force after six months – late 2024.
  • The provisions for high impact general purpose AI with systemic risk and the provisions on the obligations on high-risk AI will come into force after 12 months – summer 2025.
  • Provisions dealing with governance and conformity bodies will come into force after 12 months – summer 2025.
  • All other provisions will come into force after two years – summer 2026.
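
As a rough illustration, the sketch below derives these staggered deadlines from a hypothetical Official Journal publication date (the actual date is not yet known), applying the 20-day entry-into-force rule and the reported six-, 12- and 24-month offsets.

```python
# A minimal, illustrative sketch of the staggered deadlines, assuming a
# HYPOTHETICAL Official Journal publication date. The Act enters into
# force 20 days after publication; the offsets below are those reported
# from the negotiations and may change in the final text.

import calendar
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day where needed."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

publication = date(2024, 6, 1)                       # hypothetical
entry_into_force = publication + timedelta(days=20)  # 2024-06-21

milestones = {
    "Prohibitions on banned AI apply":           add_months(entry_into_force, 6),
    "GPAI, high-risk and governance provisions": add_months(entry_into_force, 12),
    "All other provisions":                      add_months(entry_into_force, 24),
}

for label, deadline in milestones.items():
    print(f"{label}: {deadline.isoformat()}")
# Prohibitions on banned AI apply: 2024-12-21
# GPAI, high-risk and governance provisions: 2025-06-21
# All other provisions: 2026-06-21
```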

Osborne Clarke comment

The summary above comes with the caveat that it is based on reports and leaks from the negotiations, not on first-hand sight of the text – which indeed will not be finished for some time. The official line is that the remaining work is mainly in relation to the descriptive recitals that precede the operative provisions of the legislation. Other commentators are observing that "details" in areas such as definitions, exceptions or the scope of provisions can significantly shape the final law.

Commentators have indicated that the first "full" text may not be ready for discussion within the Council before the end of January, or possibly later. We understand that the technical drafting must be complete by the end of February if there is to be sufficient time remaining for the Parliament to adopt the text before the current session ends ahead of the elections in early June.

Nevertheless, understanding the broad direction that has been agreed is clearly valuable. In particular, knowing the staggered compliance deadlines described above is important for businesses wishing to begin their planning.

All companies should – once a reliable text is available – assess the AI Act's impact on their business. Particularly important initial considerations will be:

  • The extent to which the business is involved in the development of general purpose AI systems. General purpose AI developers will need to consider whether their systems are likely to be classified as "high impact" with systemic risk.
  • Whether the business develops, provides or uses systems that will be prohibited.
  • Whether the business develops, provides or uses high-risk AI systems.
  • Where the business sits in the framework of obligations along the supply chain of planned or existing AI systems, products or services, given that the AI Act imposes obligations on entities that "place AI systems on the market" and applies to various parties at various points in the AI supply chain.

In each of these cases, it will likely be necessary to establish an AI governance programme and processes to ensure compliance in good time for the relevant deadlines.

As with all technology, compliance may well have a technical aspect. Our enduring recommendation is to bring these regulatory considerations into technical development discussions at the earliest opportunity, since retrofitting compliance into technical architecture can be both expensive and disruptive.

If you would like to discuss any of the issues in this article, please contact the authors or your usual Osborne Clarke contact.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
