Ethics, robots, liability and taxes: what should good look like?

Published on 3rd Mar 2017

On 16 February 2017, the European Parliament debated the report and recommendation put forward by the Legal Affairs Committee on the legal, ethical and social issues raised by the steady incursion of robots into increasing areas of human activity (see our previous article here).

Disappointingly for those hoping for futuristic proposals from Europe’s legislature, the plenary rejected the most controversial recommendations, such as imposing a tax on the use of robots that replace human workers in manufacturing in order to fund a universal basic income.

Despite this approach having some high-profile proponents, not least Bill Gates, rejection was probably the right decision. Taxes are blunt instruments and frequently have unforeseen consequences, as businesses turn from the straightforward objective of maximising productivity and profit to that of minimising the tax liability on those profits. A more complex and considered approach would instead need to be devised, although it seems unlikely that anyone can foresee accurately enough how robots will affect industries at large to design such a scheme. Robot surgeons, coffee baristas, care home assistants and, of course, autonomous vehicles will disrupt their respective sectors of the economy in different ways, and the rents derived from using them will depend not solely on the use of mechanical rather than human labour but also on the cost structures, ownership models and intellectual property invested in deploying them.

The proposal for a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, was also rejected.

The liability question

One question that was taken more seriously was that of liability for damage caused by robots. By definition, if a robot is autonomous, then any damage it causes is not the result of a conscious or negligent act by any human being. Product liability is one way to address this, but the manufacturer of an autonomous robot might reasonably object to being held liable for an act where the robot had been instructed by its purchaser to carry out a task that caused the damage and that was not foreseeable by the manufacturer.

The proposed insurance approach suffers from a similar problem: even if all parties connected to a given robot are required to hold insurance for its actions, there will still be obvious scope for argument as to which insurer should accept a claim. The European Parliament did not descend to the details of how liability should be handled, but called for “a compulsory insurance scheme where relevant and necessary for specific categories of robots” to cover the potential damage they cause. One possibility is to create a general fund for all smart autonomous robots; another would be specific funds for different categories of robot.

Regulating the robots?

Regulation of robots is unlikely at this stage. It would be an obvious mistake to rush in with sweeping laws imposing liability on developers or manufacturers, only to find that, as a result, research and development moves to less constraining jurisdictions. California is already seeing this happen with the testing of autonomous vehicles: its regulations require developers to record and report incidents, while other states are more laissez-faire.

In any case, the problem is nuanced: regulation, if and when it is introduced, needs to differentiate between different technologies and use cases, such as autonomous vehicles (including driverless cars and drones), industrial robots (such as production line machines), and semi-autonomous robots used in public settings (including in the workplace, public spaces, public facilities and homes).

In the circumstances, continued monitoring, and even preparation for a future age of robots, is the only rational way forward; the proposed European agency for robotics and AI, to be staffed by regulators and technical and ethical experts, may therefore also find a place in the Commission’s ultimate proposal.

What’s next?

The European Commission’s proposals, when they emerge, will have to go through the Parliament and the Council before becoming law, so there are still likely to be a couple of years before any new legislation takes effect. But at least the European Union is participating in the ‘first wave’ of legislation rather than being left to follow principles established by other jurisdictions such as the USA, Japan and South Korea.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.