European Commission's AI White Paper: a new framework for liability issues

Published on 20th Feb 2020

Ursula von der Leyen, the new President of the European Commission, had announced that a regulatory framework for artificial intelligence (AI) would be adopted within her first 100 days in office, as part of the Commission's ambition to promote "A Europe fit for the digital age".

One hundred days was obviously too short, but in its much anticipated White Paper published on 19 February 2020, the Commission has unveiled an ambitious programme intended to strengthen and consolidate a European approach to AI that both promotes a favourable ecosystem for European technology industry stakeholders and protects users. Member States, social and industry partners, civil society organisations, researchers, the general public and other stakeholders are invited to comment on the White Paper by 19 May 2020.

Building on this consultation, the Commission will conduct an impact assessment in the last quarter of 2020 in order to determine the need for harmonisation measures within the single market. Such measures could take the form of amendments to the existing legal framework or the adoption of new specific legislation. In addition, by the end of 2020, the Commission will propose to the Member States a revision of the 2018 Coordinated Plan on AI.

"Ecosystem of trust"

The White Paper's overall goal is, "based on European values, to promote the development and deployment of AI" and to preserve "the EU’s technological leadership". This is, of course, a statement made in the wider context of Europe's position in the global digital economy.

In an indirect reference to the strength of other global economies in this space, the White Paper acknowledges that, while Europe has a strong position in digitised industry and business-to-business applications, it is still relatively weak in the area of consumer platforms. The underlying aim of the White Paper is to address and redress some of this imbalance in favour of the EU.

The White Paper is focused on encouraging the development of AI within the EU by providing a secure and reliable framework for both the industry and users. Reconciling these interests is a key requirement: for example, while the Commission is hoping to attract more than €20 billion per year of investment in AI between 2020 and 2030, it is also seeking to improve Europeans’ lives while underpinning respect for their fundamental rights. The Commission intends to create a human-centric "ecosystem of trust" through its vision of a future regulatory framework for AI.

AI is already indirectly subject to various strands of existing legislation, including fundamental rights law, consumer law, and EU product safety and liability legislation. For instance, the Regulation on the Free Flow of Data and the General Data Protection Regulation (GDPR) provide a comprehensive legal framework for the distribution and processing of non-personal and personal data respectively. AI falls naturally within the scope of these regulations, as data (and data analytics) power this new digital technology.

It is disappointing, however, that the White Paper does not acknowledge the sometimes awkward fit between the GDPR and the application of AI, particularly in the context of deep predictive machine learning, which does not lend itself easily to the consent model advocated under the regulation's Article 22 (concerning automated decision-making).

Data is critical to the development of AI in products and services, and the Commission is well aware that it must ensure that enterprises have access to data in all sectors and markets to foster innovation. As it states in its White Paper: “advances in computing and the increasing availability of data are … key drivers of the current upsurge of AI”.

As part of its approach in the White Paper, the Commission identifies various gaps in existing legislation that still need to be addressed to guard against risks which are specific to AI technologies.

The White Paper distinguishes risks for fundamental rights, including discrimination (bias), privacy and data protection, from risks for safety and the effective functioning of the liability regime. Risks for fundamental rights might result from AI systems implementing a flawed design or from the input of biased data, causing the systems to "learn" while in operation on the basis of skewed data. Safety and liability risks might originate from flaws in the design of the AI technology or in object recognition systems, leading, for instance, an autonomous car to wrongly identify an obstacle on the road and cause an accident.

The White Paper covers all of these aspects with new proposals to further address the protection of fundamental rights and privacy, in particular in relation to systems able to collect biometric data for remote identification. The focus of this note, however, is on the Commission's proposed approach to liability issues in relation to AI, as this is the first time the question has been addressed in such a comprehensive way.

Gap analysis of existing liability regimes at national and EU level

In 2018 the Commission appointed a group of independent experts, the High Level Expert Group (HLEG) on Liability and New Technologies, which published a report on Liability for AI and other emerging technologies on 21 November 2019 (the HLEG report). The Commission's White Paper relies heavily on the findings of the HLEG report.

The White Paper stresses that each Member State has a different set of rules for civil liability, and that Member States' liability regimes, whether based on fault-based liability or strict liability, may not be sufficiently adaptable to address the unique risks linked to AI solutions (products and/or services) and do not apply consistent standards with regard to the conditions of liability and the burden of proof. Taking proof of causation as an example, it may become overly difficult to establish that the cause of harm originates from a flawed algorithm, especially if the algorithm was developed by an AI system incorporating machine learning, which may be inherently opaque in nature.

While such gaps could be addressed at national level by amending existing legislation, the Commission is concerned that this may result in disparate national rules, potentially fragmenting the single market and creating obstacles for companies operating or selling AI solutions within the EU. Consequently, the Commission reasons that adopting a common European framework for AI is necessary to support the competitiveness of European companies.

With respect to the EU legislative framework, the Commission acknowledges that there are already regulations addressing product safety and liability. On the safety issue, the General Product Safety Directive ensures that only safe products are sold on the market. The Directive requires producers to inform consumers of any risks associated with the products they supply and to make sure any dangerous products present on the market can be traced so they can be removed to avoid any risks to consumers. On the liability issue, the Product Liability Directive provides rules for compensation for damage suffered by a consumer as a result of a defective product. The Directive introduces the principle of strict liability applicable to European producers.

However, challenges specific to devices incorporating AI make it more difficult to attribute liability and to claim compensation, particularly given the multitude of actors potentially contributing to the design, functioning and operation of AI-based products or services. Some of these challenges stem from the complexity of AI, its degree of autonomy and its relative opacity: its capacity to take non-predetermined decisions within non-identical environments and to learn during operation, while its inner workings are not fully understandable to humans (the famous "black box" effect). These challenges can make it very difficult to identify and prove a defect in a product. In addition, as the Product Liability Directive focuses on the moment when the product is put into circulation on the market, it may not cover damage arising after an update or an upgrade is made by the producer.

As the White Paper notes, the victims of damage resulting from the use of a product or the provision of a service implementing AI are likely to "have less effective redress possibilities compared to situations where the damage is caused by traditional technologies". As pointed out in the HLEG report, "litigation for victims may become unduly burdensome or expensive".

A liability regime better adapted to AI?

The White Paper describes in relatively detailed terms the new obligations that could be adopted to address the risks caused by AI, starting with preventative obligations at the design and development stage, before the AI-based product or service is made available on the market, and moving on to an adequate framework for effective liability mechanisms once damage has occurred.

With respect to preventative obligations, the White Paper follows the distinction made by the HLEG between high and low risks and suggests a risk-based approach, adopting additional rules for high-risk applications of AI.

High-risk applications would include those arising in sectors such as healthcare, transport, energy, policing or the judiciary, and those where the life, health or property of an individual or a legal entity is at risk. In addition, AI applications used for remote biometric identification or in recruitment processes would also be considered high-risk. For high-risk AI applications, in addition to keeping records relating to the development of the AI system, it is recommended that, prior to any commercialisation, a conformity assessment of the algorithms, data and design processes be carried out in the EU to assess compliance with European rules. Independent testing centres could perform such assessments, and the Commission also suggests adopting a European governance structure, in the form of a permanent committee of experts providing assistance and issuing guidance and opinions.

In addition, the White Paper considers making targeted amendments to EU product safety and liability legislation, as well as adopting new AI-specific legislation to address the risks of AI and other new emerging digital technologies. A report on the safety and liability implications of AI, the Internet of Things and robotics was published alongside the White Paper, suggesting, for instance, that the definition of product in the Product Liability Directive should be "clarified to better reflect the complexity of emerging technologies", for example by taking into account the somewhat blurred line between products and services for digital applications.

Furthermore, the Commission shares the HLEG report’s view that a distinction should be made between the responsibilities of the different parties that contributed to the AI-based product or service, for example between the producer of the product or service and the AI developer. The White Paper explains that risks arising from the development phase should be addressed by the developers of AI, while for risks occurring during the use phase the relevant obligation should be placed on the person using the AI-based product or service (the deployer). On the issue of allocating responsibilities, the HLEG report also suggested that, in cases of "significant harm", strict liability should lie with the person who is most in control of the risk connected with the operation of AI and who benefits economically from its operation. As a result, in some cases this person may be the operator of the product (for example, the provider of services using the relevant digital technology), and in other cases the backend provider (such as the producer of the device containing digital technology, or another service provider updating or maintaining features available within the device), in each case profiting from the data generated by such operation.

Using the autonomous car as an example, the White Paper also stresses the difficulty for users under the Product Liability Directive "to prove that there is a defect in the product, the damage that has occurred and the causal link between the two". On the issue of proving causation, the White Paper quotes the HLEG report on liability and indicates that the Commission is seeking views on the extent to which the burden of proof in such cases could be adapted or potentially reversed. The alteration of liability burdens is likely to prove highly controversial with relevant stakeholders, but it is difficult to see how such risks could otherwise be equitably shared when harm has been caused, particularly given the status quo across Europe, where it is very difficult, if not impossible, for legal redress to be achieved effectively.

This proposed approach, combining a set of new obligations for AI-based solutions, a definition of high-risk AI applications, a revision of the EU safety and liability directives both to adapt them and ensure a harmonised regime across the EU, together with the supervision of a European governance structure, is the focus of the White Paper consultation launched by the Commission.

Osborne Clarke comment

There is no doubt that the EU Commission sees this as an opportunity to set a new regulatory gold standard for the world, as it did with the GDPR. Obviously, the stakes are very high given the pervasive and fast-growing use of AI today.

While the goal of ensuring product safety and effective liability mechanisms is commendable, and should serve AI-based applications through increased user trust, there is plenty in this consultation to give "big tech" (whether EU-based or not) cause for concern. The Commission is taking a bold first step, leading by example in much the same way as it did for data and privacy. Whether this approach achieves the stated aims of fostering innovation and creating an ecosystem of trust is clearly a challenge, and an opportunity for pro-active and strategic contributions by all stakeholders.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
