
European Parliament considers introducing legal rights and responsibilities for robots

Published on 13th Jan 2017

On 12 January 2017, a committee of the European Parliament approved a draft resolution on robotics and artificial intelligence (AI). The resolution, which will be debated by Parliament next month, contains a number of eye-catching proposals. If passed, it could pave the way for robots to be given legal rights and responsibilities, raising a host of legal, ethical and societal issues.

Automation: creating opportunities and challenges

Automation is rolling out across an ever broader range of industry sectors, from farming to transport and healthcare, while the sophistication of the devices concerned is also advancing rapidly. The potential social impact is becoming a matter of general discussion as more and more job roles look set to be eroded or even replaced by computers (whether physically embodied as robots or otherwise) capable of solving problems and performing tasks, such as decision-making, which normally require human intelligence. But discussion of the legal implications is only just beginning.

AIs and IP

The possibility of IP rights for AIs, including the extent of such rights and by whom they should be owned, is one such area for consideration. In 2016, the Japanese government apparently approved an IP Promotion Plan 2016, which includes the principle of establishing a new protection system for intellectual property created by AIs.

This fascinating question spotlights some assumptions being made about the relationship between AIs and society at large. Unless and until we question the status of a computer system as property belonging to some legal or natural person – normally, the legal or natural person who has invested in the hardware and software – there is no obvious need to start wondering about ownership of any IP rights in the works that the AI produces. For instance, English law deems the author of a work created by software, such as programs which write journalistic stock market updates or original music in the style of a given composer, to be the person who makes the arrangements necessary for the creation of the work. Nothing limits this provision to computer systems less advanced than AIs.

Computers have of course moved on from the purely passive tools they were when the Copyright, Designs and Patents Act 1988 was passed, towards being independent decision- and choice-making entities. The works they produce may well go beyond the works that a human creator might produce, because they can access and recall a greater scope of information (including existing literary and artistic works) and process it more thoroughly and more quickly than the human mind.

Granted, then, that such machines will produce works at least as original and inventive as those of human minds, does it follow that intellectual property rights should be conferred on the AI itself?

What is the EU proposing?

In 2017, the European Union is beginning to grapple with these questions. A draft resolution, prepared and approved by the European Parliament’s Legal Affairs Committee, takes a radical approach. It addresses not only matters of relative detail, such as IP rights in robots’ creations, but also whether companies that replace human workers with smart machines should be taxed on the extent of the robotic contribution to their profits. It suggests that this might fund a universal living wage for the increasing number of permanently unemployed. It even floats the possibility of creating a specific legal status of ‘electronic persons’ for the most sophisticated autonomous robots.

Before moving to create a new category of legal entity, however, the legislators should pause to consider whether it is necessary or desirable to confer legal status on even the most sophisticated robots. The instinct to do so may feel morally right. But legislators should question the origins of that instinct. It could be as simple as decades of science fiction on paper and on screen presenting human-like robots which evoke our empathy, and hence a feeling of fellowship quite inappropriate to the real world of smart machines. Indeed, it can be argued that in humanising robots we not only further dehumanise real people (by reducing the relative value ascribed to time spent on personal relationships) but also encourage poor human decision-making in the allocation of resources and responsibility, at both individual and institutional level. Nominating an AI as blameworthy for a particular outcome implicitly exonerates the people or institutions which created the AI from their responsibility for designing it and setting it in motion.

But if the ‘human’ instinct wins out and advanced robots are to be recognised as an independent category of legal person, with the capacity to create and own property, then along with those legal rights they must also be able to assume responsibilities and liabilities. The European proposal suggests a strict liability regime for damage caused by an autonomous robot, with damages funded by a compulsory insurance arrangement. This could be funded by the makers and/or owners of robots, backed up where necessary by a mutual fund along the lines of the schemes currently applied for victims of car accidents. But it also proposes that the attribution of liability should be on a sliding scale depending on the extent to which the machine has taught itself, going beyond its designers’ knowledge. This raises complex questions of causality which would significantly complicate the legal process of allocating risk.

What’s next?

The current proposal is only a first step in a process for introducing and then debating any actual new law, and clearly there is scope for much to change along the way. But businesses would be well advised to keep a close eye on that process, since the impact of some of the current ideas, if enacted as law, could be game-changing. And though European legislative processes can be ponderous, where there is a consensus as to the principles, they can move very swiftly. The other Committees consulted by the Legal Affairs Committee over its draft were largely supportive, suggesting that such a consensus may not be difficult to achieve.

The European Parliament’s debate will take place in February 2017.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
