
Europe opens the decisive exchanges on the future regulation of AI systems

Published on 20th Aug 2021

The European Commission has made its 'first serve' on the scope of its proposed Artificial Intelligence Act


For (German) tennis fans, 2021 will be marked by Alexander Zverev's gold medal in the Tokyo Olympics tennis tournament: his victory over world number one Novak Djokovic in the semi-finals, and his triumph in the face of Karen Khachanov's fierce serving and tenacious rallies in the final. For observers of legislative developments in the arena of technology, meanwhile, the European Commission has delivered its own first regulatory "serve" for Europe-wide uniform rules in the field of intelligent software systems.

To take the tennis analogy a little further: the serve is only the beginning of a rally. And, in terms of the regulation of artificial intelligence (AI) systems, it is likewise important for Europe as an economic area to return the ball early and start the rally – that is, to enter the discussions.

The Commission proposed a new regulatory framework for AI on 21 April 2021, and the draft legislation creates a pyramid of risk-based regulation for AI in the European Union. This has come amid wide-ranging debate around the legal and regulatory implications of the advance of AI, from ethical, societal and discrimination issues to the importance of data and questions of liability. So what will be the scope of application of this proposed new framework for AI in the EU?

Where's the outcry?

Although there have been murmurs in the international business community and among onlookers in recent months about the Commission's April proposal to regulate artificial intelligence and set a generally binding, cross-sector standard, a major outcry has so far failed to materialise.

Experience of the legislative process for the General Data Protection Regulation has shown that there will be numerous changes in the course of the consultation process with the European Parliament and the Council of the EU – and these will form part of their "return game" and the subsequent rally.

Which markets does the regulation apply to?

The AI Regulation will apply to any EU market in which providers develop AI systems, importers import them, distributors offer them, or other companies offer and use them.

However, certain economic sectors will be subject to exemptions (as laid out in Article 2 No. 2-4 of the proposed AI Regulation). For example, it will not apply to the automotive industry if an EU type approval is required for motor vehicles using an AI system (see Article 2 lit. f of the AI Regulation).

In other sectors, an economic link to the European Economic Area (EEA), and the use of the output of an AI system in the EEA is sufficient to fall within the scope of the AI Regulation – although this may not be as simple as it sounds, as discussed below.

Which systems will it apply to?

Article 3(1) of the AI Regulation defines AI systems as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with".

Although the term "AI system" is broadly defined, the definition must be read in the context of the application scenarios. The draft AI Regulation defines the application scenarios – "prohibited", "high risk", "medium risk" and "low (no) risk" – very specifically. For example, recommendation algorithms or search engine algorithms clearly fall within the definition of "AI systems" but would not be subject to the regime for high risk or prohibited systems unless they also fell within those definitions.

The focus of the definition is clearly on the programming techniques used, which are set out in detail in Annex I of the regulation (which the Commission will have the power to amend; see Article 4).

The following programming techniques are set out in Annex I:

  • "Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
  • Statistical approaches, Bayesian estimation, search and optimization methods."

Clearly, not all software will fall under the scope of the regulation, but – since machine-learning methods are used in almost every area of life and often offer the greatest potential for optimisation – many software applications will be caught.

The categories of high-risk and prohibited software require further refinement, as various aspects are unclear in the proposed regulation. For example, under Article 5 No. 1 lit. a), the regulation prohibits AI systems that use subliminal techniques beyond a person's consciousness to distort their behaviour in a way that causes or is likely to cause physical or psychological harm to that person or another person. There are different views about the reach of this provision, particularly in relation to the ethics of media and advertising.

It might seem a stretch to include the entire area of personalised advertising and social media recommendation algorithms within this category, but some have expressed that concern (including, for example, Jessica Heesen, head of the Research Centre Media Ethics and Information Technology at the International Centre for Ethics in the Sciences and Humanities, Eberhard Karls University of Tübingen).

On the other hand, given the broad public discussion, as well as the transparency measures taken by platform operators and the possibility of turning off these functions, it cannot be assumed that users have no knowledge or awareness that they are being fed personalised advertising or news curated by algorithms. Visitors to websites are informed about cookies and personalised advertising before they enter the website; these techniques are therefore far from being used in the dark. In Germany, moreover, general competition and media law already prohibits "subliminal techniques" (see Section 22(1) sentence 2 MedienStV).

In practice, the AI Regulation's proposed prohibition on subliminal techniques may be very difficult to enforce: proving that certain techniques had achieved a subconscious effect on someone, and that this had in turn caused harm to that person or someone else, would be a formidable task.

Which businesses must comply?

The AI Regulation contains a raft of legal definitions to describe the various categories of business and people that must comply with the requirements of the regulation.

Distinctions must be made between the "provider", the "importer", the "distributor" and the "user", who are collectively referred to as the "operator". Of course, the "provider", the developer of the AI system, is at the centre of the regulation and has essential duties in the development and monitoring of the AI system (see Article 2 and Article 16): "Provider means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge."

"Providers", who design, build and place an AI system on the market, are affected by numerous rules of the AI Regulation. It is irrelevant whether the "provider" markets its software to consumers or businesses.

"Importers", who import the AI system into the EEA from a provider located in a third country, are also obliged to a certain extent to ensure the safety of use of the AI system (for example, to check whether a sufficient declaration of conformity is available).

According to Article 3(4), "user" means "any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity". What matters is not the hardware on which an AI system is operated, but who controls the software.

A user uses the AI system within the limits drawn by the provider, who develops the system. Therefore, software-as-a-service (SaaS) users are "users" within the meaning of this provision.

Any "user", "importer", "distributor" may become a "provider" if: (1) they make the AI system itself available for the first time in the EEA under their own name or (2) the intended use of an AI system already placed on the market is changed or (3) the AI system is substantially modified (Article 28). While the interpretation of variants (1) and (2) does not pose any problems, clarification will be needed as to when the threshold of a substantial modification of an AI system is reached. It is worth noting, moreover, that by their nature, AI machine learning systems are dynamic and may change continuously as data is passed through them – again, clarification will be needed as to whether this inherent dynamic change could, at some point, amount to a substantial modification.

Where does the regulation apply?

The corporate seat of the business or the location of its servers is not relevant if AI systems are marketed or offered for use (even free of charge) in the EEA. What is unclear is how to understand the rule that businesses ("providers", "users") based in a third country also fall within the scope of application if the "output" of the system is used in the Union (see Article 2 No. 1 lit. c) – for example, SaaS providers.

Some have argued that the mere "availability" and "usability" of the output by a consumer resident in the EU may not be sufficient as such, because, according to the wording of the relevant provision, it is the company resident in the third country that must use the "output" in the EU, not the consumer (see Bomhard/Merkle, RDI 2021, 276, 278). However, this argument is not supported by recital 11 of the AI Regulation: "In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union."

As an example, recital 11 stipulates that, where company A established in the EU contracts with company B outside the EU to use outputs from B's AI system to support A's business activities in Europe, the AI system and the use of its output will fall within the geographical scope of the regulation if the output of B's AI system impacts natural persons in the EU.

Moreover, if data concerning European companies or European residents is used by the AI system to generate an output, the AI Regulation may apply, because that output potentially affects natural persons in the European Union.

In conclusion, the AI Regulation is intended to have extra-territorial scope because, as recital 11 stipulates, the third-country company does not need to use the AI system in the EU, or itself use the output in the EU, in order to fall within the scope of the AI Regulation. However, this interpretation is far from indisputable and the wording is, of course, far from final. In particular, the offering or use of the output is missing from the legal definitions of all the responsible parties: it appears neither in the definition of "provider" (in the sense of Article 2(1) lit. a) nor in those of "user", "distributor" and "importer".

Osborne Clarke comment

The Commission's first attempt to bring AI systems within a new regulatory framework is not an "ace". How powerful the "return" of the European Council, and in particular of the European Parliament, will be depends on the will of the parliamentarians, which is now taking shape, and on the influence of associations, business representatives and lobbyists. In the end – unlike in tennis – what is decisive is not which of the (more than two) players emerges as the winner, but whether the AI Regulation strengthens Europe as a place of innovation. The debate about the pros and cons of regulating AI has only just begun – and, with this to-ing and fro-ing, the AI Regulation is likely to come into force in 2025 at the earliest.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
