Present and future of the regulatory framework for artificial intelligence

Published on 20th Feb 2019

Although the concept of artificial intelligence is still unfamiliar to some, the truth is that, since its inception, it has become one of the most transformative forces of our time. In this article, we review the concept of artificial intelligence, its implications and its current regulatory framework.

The term "artificial intelligence" appeared for the first time in 1956 and, according to one of its founders, Marvin Minsky, it is "the science that makes machines do things that would require intelligence if done by men". With this in mind, we could say that the concept of artificial intelligence involves a number of different technologies based on computer programs, data and algorithms, which not only perform scheduled tasks mechanically, but are also capable of learning from experience (like a human would), thus being capable of making judgements and independent decisions.

Since its creation, artificial intelligence, like any transformative technology, has presented not only new problems but also new legal and ethical challenges, which leads us to question whether our legal system is prepared to deal with developments that are already part of our day-to-day life. Although the law provides tools to respond to this phenomenon, it is clear that the exponential growth and impact of artificial intelligence on our everyday life (autonomous vehicles, home robots and cybersecurity, among other things) require an ongoing dialogue spanning the legal, ethical and scientific disciplines.

This is why, in 2016, six major technology companies (Amazon, Apple, Google, Facebook, IBM and Microsoft) set up a non-profit organization, the Partnership on Artificial Intelligence, whose goal is to study and formulate best practices on artificial intelligence technologies.

In Europe, it was not until June 2018 that the European Commission set up a High-Level Expert Group on Artificial Intelligence (AI HLEG). The AI HLEG seeks to (i) increase public and private investment in artificial intelligence to encourage its uptake, (ii) prepare for the socio-economic changes that artificial intelligence will bring about, and (iii) ensure an appropriate ethical and legal framework that strengthens European values. Some months later, in December 2018, the group, which consists of 52 experts representing academia, industry and civil society, published a draft of its Ethics Guidelines for Trustworthy AI, which was open for consultation until January 2019. In March 2019 the expert group will present its final guidelines to the Commission, which will analyse them and propose how to take this work forward.

The draft, which sets out a framework for joint action and closer cooperation between Member States, rests on two premises for making artificial intelligence "trustworthy". First, it should respect fundamental rights, applicable regulation and core principles and values, ensuring an "ethical purpose"; and second, it should be technically robust and reliable, since a lack of technological mastery can cause unintentional harm.

Based on these two premises, the expert group has produced a set of guidelines consisting of three chapters. The first chapter establishes that artificial intelligence should be human-centric and, therefore, grounded in fundamental rights, societal values and the ethical principles of doing good ("beneficence"), human autonomy, justice and explicability (operating transparently). Particular attention is paid to situations involving asymmetries of power, in which vulnerable groups such as children, persons with disabilities or minorities may be involved; examples include the relationships between employers and employees, or between businesses and consumers.

The second chapter sets out a series of requirements for the development and use of trustworthy artificial intelligence, to make sure that the maximum benefit is gained from the opportunities it creates. The expert group establishes ten requirements that artificial intelligence must meet in order to be considered trustworthy: accountability, data governance, design for all, governance of artificial intelligence autonomy based on human oversight, non-discrimination, respect for (and enhancement of) human autonomy, respect for privacy, robustness, safety and transparency. To ensure that these requirements are satisfied, the guidelines propose a series of technical methods (such as traceability and auditability) and non-technical methods (among others, codes of conduct, education and awareness to foster an ethical mind-set, and social dialogue).

In the third and final chapter, the expert group provides checklists to help assess whether any artificial intelligence developed in the future is "trustworthy" and meets the requirements set out above. The target audience of this chapter is therefore the individuals and teams responsible for the design, development and deployment of artificial intelligence systems.

Pending the presentation of the final version, the draft lays the foundations on which the European Union's legal framework for artificial intelligence will be developed, and encourages reflection and debate on a matter that may have a great impact on a myriad of areas, both at European and global level. However, as technology and our knowledge of it evolve, the guidelines that are finally adopted will probably need to be updated regularly to cover new developments over time.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
