
Proposal for a Directive on Artificial Intelligence Liability – easier claims for victims and obligations to disclose documentation

Published on 7th Oct 2022

On 28 September 2022, the European Commission published a proposal for a Directive on Artificial Intelligence Liability (exact title: “Directive on adapting non-contractual civil liability rules to artificial intelligence” (AI Liability Directive)). The new rules are to ensure that victims of damage caused by AI-enabled products or services benefit from the same standards of protection that would apply in “normal” circumstances, that is, where artificial intelligence is not involved.

The provisions of the Directive are intended – while supporting the transition to the digital economy – to guide the adaptation of national laws to the special features of artificial intelligence. As indicated in the Explanatory Memorandum, surveys have shown that liability is amongst the top three barriers to the use of artificial intelligence by European companies. The proposed rules are to foster consumers’ trust in artificial intelligence and, as a result, to promote its uptake.
 

The aim of the AI Liability Directive

According to the European Commission, current national liability rules, in particular those based on fault, are not suited to handling liability claims for damage caused by AI-enabled products and services. Under such rules, it is the victim that needs to prove a wrongful act or omission that caused the damage, while the specific characteristics of artificial intelligence – its complexity, its autonomy and the so-called “black-box” effect (broadly, the difficulty of tracing how an AI system reached a given conclusion) – may make it difficult or even impossible for victims to satisfy the burden of proof.

According to the EC, such a situation may, firstly, deter victims from claiming compensation altogether and, secondly, cause legal uncertainty, as national courts would attempt to apply existing national liability rules, adapting them on an ad hoc basis to AI systems in order to reach a just result, which would negatively affect businesses’ willingness to develop artificial intelligence. The Commission also points to the considerable significance of differences between national jurisdictions, as suppliers of AI-enabled products or services offering them on the EU market would potentially face many different sets of rules.

What exactly is to be regulated by the Directive

The Directive is to apply to non-contractual civil law claims for damages caused by AI-enabled products and services where such claims are brought under fault-based liability regimes, and only to claims for compensation of damage occurring as from the date of its transposition. The new rules will make it easier to pursue claims, for example, for persons harmed in a recruitment process carried out with the use of artificial intelligence.

Firstly, the Directive introduces a presumption of causality: where fault has been established and a causal link seems reasonably likely, the claimant is relieved of proving the causal link between the fault and the damage, which might be very difficult in the case of complex AI systems. Secondly, access to evidence held by companies and providers of high-risk artificial intelligence systems is to be facilitated.

Interestingly, the Directive stipulates that claims for damages may be brought not only by the injured person but also by persons that have succeeded to, or have been subrogated to, the injured person’s rights (subrogation being the assumption by a third party of another party’s legal right to collect a debt or damages), as well as by persons acting on behalf of one or more injured parties.

Presumption of a causal link

A crucial facilitation in claims for damage caused by artificial intelligence is to be the presumption of a causal link between the defendant’s fault and the output produced by the AI system, or the AI system’s failure to produce an output. The presumption is to apply provided that:

  1. the claimant has demonstrated the defendant’s fault (or the fault has been presumed, consisting in non-compliance with a duty of care, as described under “Disclosure of evidence” below);
  2. based on the circumstances of the case, it can be considered reasonably likely that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output; and
  3. the claimant has demonstrated that the damage was caused by the output produced by the AI system or the failure of the AI system to produce an output.

In the case of a claim for damages against a provider of a high-risk AI system subject to the requirements laid down in Chapters 2 and 3 of Title III of the AI Act¹, or against a person subject to the provider’s obligations (pursuant to Article 24 or Article 28(1) of the AI Act – the manufacturer of the product, the distributor, importer or user), the condition of demonstrating the defendant’s fault is met only where the claimant has demonstrated that the provider (or, where relevant, the person subject to the provider’s obligations) failed to comply with any of the following requirements, taking into account the steps undertaken in, and the results of, the risk management system:

a)  the AI system makes use of techniques involving the training of models with data and was not developed on the basis of training, validation and testing data sets that meet the quality criteria (defined in Article 10(2) to (4) of the AI Act);
b)  the AI system was not designed and developed in a way that meets the transparency requirements laid down in Article 13 of the AI Act;
c)  the AI system was not designed and developed in a way that allows for effective oversight by natural persons during the period in which the AI system is in use, pursuant to Article 14 of the AI Act;
d)  the AI system was not designed and developed so as to achieve, in the light of its intended purpose, an appropriate level of accuracy, robustness and cybersecurity pursuant to Article 15 and Article 16, point (a), of the AI Act; or
e)  the necessary corrective actions were not immediately taken to bring the AI system into conformity with the obligations laid down in Title III, Chapter 2 of the AI Act, or to withdraw or recall the system, as appropriate, pursuant to Article 16, point (g), and Article 21 of the AI Act.

Different conditions have been laid down for demonstrating fault in the case of a claim for damages against a user of a high-risk AI system (a “user” meaning any entity using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity). The condition of demonstrating fault shall be met where the claimant proves that the user:

a)  did not comply with the obligation to use or monitor the AI system in accordance with the accompanying instructions of use or, where appropriate, with the obligation to suspend or interrupt its use, pursuant to Article 29 of the AI Act; or
b)  exposed the AI system to input data under its control that is not relevant in view of the system’s intended purpose, within the meaning of Article 29(3) of the AI Act.

In the case of a high-risk AI system, the defendant may rebut the above presumption by demonstrating that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link. In the case of non-high-risk AI systems, the presumption of causality applies only if the court determines that it would be excessively difficult for the claimant to prove the causal link.

As a rule, the presumption of causality does not apply to “non-professional users” of an AI system, unless such a user has materially interfered with the conditions of the AI system’s operation, or was required and able to determine the conditions of its operation but failed to do so. This means that even a non-professional user, in order to protect itself against the application of the presumption of causality, should comply with the instructions for use and with other applicable duties of care. Naturally, the presumption of causality is in each case rebuttable.

Disclosure of evidence

An important measure facilitating proceedings is the court’s power to order the disclosure of relevant documentation by a provider or user of a high-risk AI system that is a potential defendant. Pursuant to Article 3 of the Directive, the court may order the disclosure of relevant evidence about specific high-risk AI systems that are suspected of having caused damage. The conditions for such an order to be issued are that:

a)  the potential claimant has submitted facts and evidence sufficient to establish the plausibility of the contemplated claim; and
b)  the claimant has made all proportionate attempts to gather the evidence from the defendant (and the defendant has refused to provide the evidence or has not replied).

Since the evidence whose disclosure may be ordered by the court may constitute a trade secret, the courts are to be authorised to order specific measures to preserve the confidentiality of such evidence and to limit the disclosure to the information proportionate and necessary to sustain the claim (whether potential or already filed). A person ordered to disclose evidence must also be given the opportunity to respond to such an order.

It is important to note that failure to comply with an order to disclose evidence results in a presumption that the defendant has failed to comply with a relevant duty of care, in particular in the circumstances referred to in Article 4(2) or (3) of the Directive, where the requested evidence was to prove those circumstances; consequently, the defendant’s fault is presumed.

Evaluation and entry into force

Five years after the transposition of the Directive, the Commission is to review it, assessing the achievement of its objectives, the need for rules on liability for claims against operators of certain artificial intelligence systems, and the need for insurance coverage in this area.

The time limit for adopting the necessary transposition measures is, according to the proposal, two years after the Directive enters into force. 

Article by attorney Aleksandra Piech.

¹ Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 21 April 2021.

 


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
