The European Union bets on Artificial Intelligence and defines a strategy for its proper development and safe use
Published on 20th Mar 2020
The European Commission, as part of its new digital transformation strategy, published the White Paper on Artificial Intelligence, which presents the opportunities this technology offers to boost Europe's digital economy and outlines the approach for its implementation, while addressing the main risks associated with certain uses of this technology.
As already pointed out in our article, "Present and future of the regulatory framework for artificial intelligence", Artificial Intelligence ("AI") is a phenomenon that is increasingly present in our lives and, as such, must be addressed from a European perspective to avoid fragmentation of the single market and to increase the Member States' capabilities in this area so as to ensure its safe use. For this reason, the European Commission published the "White Paper on Artificial Intelligence - a European approach to excellence and trust", which is open for public consultation until 31 May 2020 (barring changes to that deadline). Its objective is to lay the foundations for the European strategy in this area and to position Europe in the technological race currently contested between China and the United States.
The document sets out guidelines on the limits of this technology's development in order to prevent its evolution from undermining citizens' fundamental rights. It is structured in two blocks: the first contains a series of proposals to guarantee an "ecosystem of excellence", and the second a series of regulatory elements to generate an "ecosystem of trust".
In the first block, the Commission aims, on the one hand, to promote the development of AI and European leadership in the field, focusing on the work Member States must carry out (through various plans the Commission will present, as well as through strategies each Member State will develop at national level) and, on the other hand, to incentivise research and innovation as a way to strengthen European excellence. To this end, the Commission proposes creating research centres capable of competing with the leading institutes worldwide, focused on sectors in which Europe already has great potential, such as transportation, finance, health and energy. In short, the Commission wants to prevent the future regulatory framework for AI, or the adaptation of current legislation to this technology, from placing excessive obstacles in fields where the technology does not entail a considerable risk.
Another aspect covered in this block is the need to promote educational initiatives through universities and research centres in order to reinforce and improve the general public's knowledge of the subject. The Commission also highlights the need to enhance the access of small and medium-sized enterprises (SMEs) to this type of technology.
While the first block analyses the actions that should be carried out to enhance AI at a European level, the second block, under the title "ecosystem of trust", evaluates the aspects that should be taken into account so that this technology does not negatively impact citizens' rights and freedoms (among others, the protection of personal data, privacy and non-discrimination) or their security, these being the principal risks associated with the use of AI owing to its complexity, unpredictability and partially autonomous behaviour. The Commission suggests that these risks may result from flaws in the overall design of AI systems or from the use of data without correcting possible bias.
Against this backdrop, the Commission offers a series of measures to mitigate these risks, proposing, among other things, that all systems be transparent, subject to human supervision at all times, and technically developed with robustness and accuracy so as to provide assurance. Without prejudice to the compliance controls on AI systems carried out by competent authorities as part of their surveillance of the single market, the Commission highlights the need to establish prior conformity controls to verify and guarantee compliance with the requirements applicable to these systems. Likewise, the Commission proposes distributing compliance with the legal requirements applicable to AI systems among the different economic operators involved (producers, distributors, service providers and users), and making those requirements applicable to all economic operators offering this technology in the EU, regardless of whether they are established within its boundaries.
For the development of AI systems in sectors where the risk is lower, the Commission proposes a voluntary labelling scheme offering companies the opportunity to subject themselves to stricter requirements in exchange for a label or seal that users can easily recognise, giving companies the benefit of offering products seen as reliable in the market. Although the scheme is entirely voluntary, once a company opts to use the label, its requirements become binding.
Another topic discussed is the use of biometric data for remote identification. Although its use for this purpose is generally prohibited and permitted only in exceptional cases, the Commission uses this document to open a debate on the conditions of those exceptions and attempts to identify the circumstances, if any, that could justify the use of AI for this purpose.
In summary, although it is still too early to conclude whether these principles will be enough to guarantee Europe's leadership in AI, what we do know is that, with this text, Brussels has taken another step towards adapting to the disruptive digital landscape the EU currently faces, in which AI will play a fundamental role.