
Artificial Intelligence: the real legal issues

Published on 23rd Oct 2017

If you’re reading this, the chances are that you will have come across the concept of Artificial Intelligence in your prior research.  Like most issues “du jour”, a lot has been written on the topic, and it tends to fall into two categories: material that either presupposes a level of prior computer-science knowledge or, more commonly, is thinly disguised salesware which doesn’t convey a lot.

This article is based on a presentation I was very kindly invited to give to the SCL Annual Conference at the Institution of Engineering and Technology in London in June, and will, I hope, provide the uninitiated (and semi-initiated) with a firm grounding on which to base a practical assessment and understanding of the legal risks posed by the use of Artificial Intelligence.  As such, it should be accessible to legally minded readers with an interest in this technology.

I have categorised the legal risks into the “Causation Challenge” and the “Big Data Challenge”.

Before we get into a discussion of these challenges, however, it is worth looking briefly at the current business motivators pushing the boundaries of AI forwards and at where the current technological developments are, if only to gain a wider appreciation of the real-world applications which are driving the use of AI.

I should say at the outset that within AI it is machine learning – the capacity for machines to learn and take independent decisions – that is creating serious conceptual difficulty for lawyers.  At the heart of this conceptual struggle is the knowledge that whilst we can teach machines to learn and develop independent behaviours, on a scientific level we are still at a loss to understand how they do so, and this can lead to some very unpredictable outcomes – see for example the Google Brain neural net which, tasked with keeping its communications private, completely independently developed its own encryption algorithm.[1]

There are several real-world “machine learning” applications which are driving developments in the technology:

Image processing and tagging

Image processing, as the name suggests, requires algorithms to analyse images to extract data or to perform transformations.  Examples of this include identification/image tagging – as used in applications such as Facebook to provide facial recognition, or to ascertain other data from a visual scan, such as the health of an individual or location recognition for geodata – and Optical Character Recognition, where algorithms learn to read handwritten text and convert documents into digital versions.
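
To make the “learning” element concrete, the short sketch below (my own illustration, not drawn from any of the systems mentioned above) trains a simple classifier on scikit-learn’s bundled images of handwritten digits – a toy version of the Optical Character Recognition task described here.

```python
# Minimal sketch of "learning to read handwriting": a classifier trained on
# scikit-learn's bundled 8x8 images of handwritten digits (0-9).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                      # labelled images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = SVC(gamma=0.001)                    # a support vector classifier
model.fit(X_train, y_train)                 # "learns" from the labelled examples
print("accuracy:", model.score(X_test, y_test))   # typically around 0.99
```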

3D Environment processing

3D environment processing is an extension of the image processing and tagging skill – most obviously translated into the skills required by an algorithm in a CAV, or “Connected and Autonomous Vehicle”, to understand its location and driving environment.  This uses image data, but also potentially radar and laser data, to understand 3D geometries.  This technology could also be used in free-roaming robotic devices, including pilotless drones.
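
By way of illustration only (the points below are synthetic rather than a real sensor feed), the sketch shows the most basic form of the geometric reasoning involved: given lidar-style 3D points measured around a vehicle, work out how far away the nearest obstacle is.

```python
# Minimal sketch: distance to the nearest obstacle from a handful of
# invented lidar-style (x, y, z) returns, with the vehicle at the origin.
import numpy as np

points = np.array([
    [12.0,  0.5, 0.2],
    [ 3.2, -1.1, 0.0],
    [45.0,  8.0, 1.5],
])                                            # coordinates in metres
distances = np.linalg.norm(points, axis=1)    # straight-line distance to each point
nearest = points[np.argmin(distances)]
print(f"nearest obstacle at {nearest}, {distances.min():.1f} m away")
```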

Text Analysis

These are processes which extract information from, or apply a classification to, items of text-based data.  This could include social media postings, tweets or emails.  The technology may then be used to provide filtering (for spam); information extraction – for example, to pull out particular pieces of data such as names and addresses; or sentiment analysis – to identify the mood of the person writing (as Facebook has recently implemented in relation to postings which are potentially suicidal[2]).  Text analysis is also at the heart of chatbot technology – allowing for interaction on social media.
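
A minimal sketch of this kind of text classification appears below.  The messages and labels are invented purely for illustration, but the pattern – turn text into word features, then learn which features predict which label – is the same one that underpins spam filtering and sentiment analysis.

```python
# Toy text classifier: word features (TF-IDF) plus a Naive Bayes model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts  = ["win a free prize now", "meeting moved to 3pm",
          "claim your cash reward", "minutes attached from today's call"]
labels = ["spam", "ham", "spam", "ham"]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(texts, labels)                     # learn which words predict which label
print(classifier.predict(["free cash reward"]))   # expected output: ['spam']
```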

Speech Analysis

Speech processing takes equivalent skills to those used for textual documents and applies them to the spoken word.  It is this area which is seeing an incredible level of investment in the creation of personal digital home assistants from the likes of Amazon (with its Echo device), Microsoft (with Cortana), Google (with its Home device) and Apple (with Siri, and now the recently launched HomePod).

Data Mining

This is the process of discovering patterns or extrapolating trends from data.  Data mining algorithms are used for such things as anomaly detection – identifying, for example, fraudulent entries or transactions; association rules – detecting supermarket purchasing habits by looking at a shopper’s typical shopping basket; and prediction – predicting a variable from a set of others to extrapolate, for example, a credit score.
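
The anomaly detection use case lends itself to a very short sketch.  The transaction amounts below are made up, and scikit-learn’s IsolationForest is simply one common off-the-shelf approach, but it shows the shape of the technique: the algorithm learns what “normal” looks like and flags the outlier.

```python
# Toy anomaly detection over invented transaction amounts.
import numpy as np
from sklearn.ensemble import IsolationForest

amounts = np.array([[25.0], [31.5], [28.0], [30.2], [27.8], [950.0]])  # one outlier
detector = IsolationForest(contamination=0.2, random_state=0).fit(amounts)
print(detector.predict(amounts))   # -1 should flag the anomalous 950.0; 1 marks normal
```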

Video game virtual environment processing

Video games are a multi-billion-dollar entertainment industry, but they are also key sandboxes for machines to interact with and learn behaviours in relation to other elements in a virtual environment, including interacting with the players themselves.[3]

So much for a quick overview of the practical applications.  Let’s take a look at the legal challenges.

 

The Causation challenge

So what do I mean by the “Causation challenge”?

Well, what I am referring to here is how traditional liability questions are settled: essentially, through the attribution of fault by the application of causation principles.

Fault drives compensation.  Whether it is tortious, contractual or – to a more limited degree – consumer protection liability, it is this attribution which enables parties injured financially or physically to seek redress and obtain compensation for such damage.  Consumer protection is obviously strict liability by its nature, but even here you need to establish the existence of a defect.

As lawyers, we all understand that fault attribution is driven by the mechanism of causation.  If you can pinpoint the cause, then you can assign the blame.  Whether it is establishing breach of a duty of care in tort, breach of an express or implied term in a contract, or establishing a defect in consumer protection liability, in each case the fault or defect must have caused the loss.

The real issue with AI-powered devices is that, as the decisions they take become increasingly removed from any direct programming and are instead based on machine learning principles, as we have discussed above, it becomes harder to attribute fault.

Our existing liability frameworks deal comfortably with traceable defects – machine decisions that can be traced back to defective programming or incorrect operation.  They begin to fail, however, where defects are inexplicable or cannot be traced back to human error.

We’re now seeing regulators thinking about and grappling with this problem.

As I suggested in a 2016 ITechLaw conference paper[4], and as was subsequently advocated by the European Parliament Committee on Legal Affairs in its January 2017 Report on Civil Law Rules on Robotics[5], one of the ways to “fix” this would be to introduce a strict liability system backed by a licensing fund and a certification agency.

The way this would work would be to introduce an assessment system for a robotic or AI device which would require the payment of a levy to release that device onto the open market.

I like to refer to these certification agencies as “Turing Registries”[6] after the great computer pioneer, Alan Turing, although the European Parliament uses the rather more prosaic “EU Agency for Robotics and Artificial Intelligence”.

The levy would go into a fund which would enable the payout of compensation in the event a risk transpired.

This system has some historical precedent, as a variant of it is already in force in New Zealand in the shape of the Accident Compensation Act 1972[7], which statutorily settles all forms of personal injury accident (including road traffic accidents) and has effectively abolished personal injury litigation in that country.

I personally prefer this solution as it is essentially scalable – you can imagine that, as machines become more and more capable, they could attract higher licensing charges to fund a greater variety of risks.

What has the UK been doing to address this challenge?  We’ve seen the most movement in the CAV space.

The UK Government recently concluded its consultation document on driverless vehicle development – including an assessment of the way in which such autonomous vehicles should be covered by insurance.  This was the snappily titled “Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies”[8] which ultimately led to the Vehicle Technology and Aviation Bill, presented by Chris Grayling during the last parliament.

The calling of the 2017 general election meant that this proposed legislative measure automatically failed – however, the measure has been substantially resurrected in the Automated and Electric Vehicles Bill, announced in the 2017 Queen’s Speech[9].

As at the date of this article, we do not have the text of the new measure, so I refer to the predecessor bill here, as it is clear that the UK government intends to preserve the position adopted in the now defunct Vehicle Technology and Aviation Bill.

So what are the legislative proposals?  Rather than go down the strict liability route I mentioned earlier, the government has chosen to address the issue of driverless cars from the perspective of gaps in current insurance coverage caused by fully autonomous driving.

This is an essentially pragmatic response that will probably work in an environment where there is a mixed demographic of driverless and human-piloted cars – it also avoids systemic change to the insurance industry.  It does, however, completely sidestep the causation challenge.  Crucially, the proposed measure relies very heavily on the ability of insurers to subrogate and therefore bring claims of their own against other third parties, including manufacturers.  This will of course be hugely problematic for insurers if the relevant fault or defect cannot easily be traced[10].

Section 2 of the Vehicle Technology & Aviation Bill as drafted provided that “where…an accident is caused by an automated vehicle when driving itself…the vehicle is insured at the time of the accident, and…an insured person or any other person suffers damage as a result of the accident, the insurer is liable for that damage”.

In essence, the principle enshrined in the bill was that if you are hit by an insured party’s vehicle that is self-driving at the time, the insurer “at fault” pays out.  If you have comprehensive cover then you will also be insured for your own injuries.  If the vehicle at fault is not covered by insurance then the Motor Insurers’ Bureau will pay out in the usual way and seek to recover its losses from the owner of the uninsured vehicle.  As noted above, it is very likely that this approach will be translated into the new Automated and Electric Vehicles Bill.

So, that is a quick walk through the current legislative proposals for AI-enabled devices, as represented by the automotive industry.  Unsurprisingly, and rather disappointingly, we are looking at a pragmatic stop-gap approach which is in effect “kicking the can down the road” rather than confronting the issue.  Sooner or later the spectre of causation will need to be confronted.

The Big Data challenge

Of course the other challenge facing users and adopters of AI is from within.

The Big Data challenge, as I have called it, has two overlapping facets.  The first is the way in which the industry capitalises on the terabytes of smart data generated or “streamed” by “smart devices” – and again, driverless cars and the transport industry are leading the way on this.

Secondly, the availability of predictive analytics modelled by AI is transforming the way in which businesses serve customers and has the potential to create serious issues around privacy.

Whilst such technologies are being used to lower costs and provide greater competition in a number of industries, such as insurance, there is also a commensurate risk of people becoming disenfranchised or excluded – taking the insurance or finance markets as an example, through the withdrawal of insurance or finance products as a result.

1. Smart Streaming

“Smart streaming” of data has already drawn the significant attention of regulators.  The European Commission has recently published its Strategy on Co-operative Intelligent Transport Systems, or “C-ITS”[11], which sets out its approach to developing a standardised intelligent transport infrastructure allowing vehicles to communicate with each other, with centralised traffic management systems and with other highway users.  The potential for such data to be misused is clearly troubling – for example, not only could a CAV identify a journey destination, it could also potentially report back on driving habits and theoretically identify any traffic offences.

In the context of our discussion, such data could obviously also have an impact on the manner in which insurance is offered to the user of the vehicle when it is human-piloted.

The policy adopted by the EU Commission has been to identify such data as personal data and therefore afford it the protection of the European data protection framework.

As the Commission identifies in its report: “the protection of personal data and privacy is a determining factor for the successful deployment of co-operative, connected and automated vehicles.  Users must have the assurance that personal data are not a commodity, and that they can effectively control how and for what purpose their data are being used.”[12]

2. Predictive Analytics

In the context of the Big Data challenge, we should not ignore the disruptive effect of AI-driven predictive analytics either.  The most immediate influence of this is best illustrated by the systemic impact predictive analytics are having on the insurance industry.

Quite apart from the potential jobs impact in relation to claims handling and processing (to take an example, Fukoku Mutual Life in Japan is laying off claims handlers in favour of IBM Watson[13]), the technology is transforming the way in which insurance companies model risk and hence price premiums.

At its simplest level, insurers model risk by way of a concept known as “pooling”.  Insurers put large groups of similar people together, using common risk modelling points, and their premiums are used to fund a “common pool”.

In any given year, some policyholders will need to be paid out and some will not.  As long as the common pool remains liquid, the system continues to work.  Predictive analytics function by giving insurers more detail about individuals and their associated risks.  This allows for far more accurate risk pricing and removes the need to pool broad groups of people together.
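
A toy arithmetic illustration of that shift is set out below.  The claim cost, risk bands and probabilities are all invented for the example, but it shows why individualised pricing can leave higher-risk individuals facing much larger premiums than the old pooled average.

```python
# Toy comparison of a single pooled premium with individually priced premiums.
expected_claim = 10_000                              # cost of a claim if the risk materialises
risk = {"low": 0.01, "medium": 0.05, "high": 0.20}   # assumed claim probabilities per band
pool = ["low"] * 80 + ["medium"] * 15 + ["high"] * 5  # 100 policyholders

# Traditional pooling: everyone pays the pool's average expected loss.
pooled_premium = sum(risk[p] * expected_claim for p in pool) / len(pool)
print(f"pooled premium:      {pooled_premium:,.0f}")   # 255 for everyone

# Predictive analytics: each person pays their own expected loss.
for band, probability in risk.items():
    print(f"{band:>6}-risk premium: {probability * expected_claim:,.0f}")
# low pays 100, medium 500, high 2,000 - the high-risk group may be priced out.
```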

Obviously this gives rise to a whole host of ethical questions about the availability and pricing of insurance.

Car driving behaviours are one obvious example: “risky” behaviours could drive up insurance pricing and “safe” behaviours lower it.  Even more serious is the impact of advanced analytics on genetic data, which might model susceptibility to genetic disease and therefore affect the pricing and availability of life insurance coverage.  The state of the art is now such that this doesn’t even require genetic material to be sampled.

So, for example, US startup Lapetus[14] can analyse a “selfie” using thousands of different facial data points to determine how quickly an individual is ageing, their gender, their body mass index and whether they smoke.  It claims its system predicts the life expectancy of individuals more accurately than traditional methods.

Postscript – Some thoughts for transactional lawyers

This is all very well, but where does this leave the transactional lawyer faced with the task of contracting for an AI based system?

It is probably the causation challenge that requires more thought in relation to contractual liability, as the challenges posed by the use of big data will be unchanged whether the data are processed by conventional or artificially intelligent systems – indeed, we are all aware (or really should be) of the onset of the GDPR and the changes that it is likely to bring.

Causation issues remain problematic.  What will need to be analysed in any real-life situational context is the propensity for an AI system to make decisions which have liability impacts.  The need here will be for both parties to avoid the “rabbit hole” of claims for damages at large, which of course largely depend on causation and proof of loss.

I would suggest that in “mission critical” applications, where an unpredicted decision is made by an artificially intelligent machine, the need will be to sidestep causation and focus on the loss itself.  This will inevitably draw us down the path of indemnity-based recovery mechanisms.  My prediction: expect to see much more of these in your contracts in the future, expressed on a much wider basis.

This article first appeared in the Society for Computers and Law journal.


[1] https://arstechnica.co.uk/information-technology/2016/10/google-ai-neural-network-cryptography/ – “Google AI invents its own cryptographic algorithm; no one knows how it works”

[2] See for example http://www.bbc.co.uk/news/technology-39126027 – “Facebook artificial intelligence spots suicidal users”, 1st March 2017

[3] See for example the July edition (E308) knowledge feature of Edge Magazine – “Machine Language” which discusses new startup SpiritAI – a business that has developed an intelligent character engine for NPCs (non player characters) in video games, thus obviating the need for thousands of pages of pre-scripted dialogue.

[4] “Liability issues in Autonomous and Semi-Autonomous Systems”, John Buyers, available online at Osborne Clarke’s website (and for ITechLaw members on the ITechLaw website).  See http://bit.ly/2tQk78i

[5] See http://bit.ly/2lmorKW

[6] The term “Turing Registry” was coined by Curtis E.A. Karnow in his seminal paper, “Liability for Distributed Artificial Intelligences”: see Berkeley Technology Law Journal 1996, Vol 11.1, page 147.

[7] See http://www.nzlii.org/nz/legis/hist_act/aca19721972n43231/

[8] See the government response to the consultation at http://bit.ly/2iLd23x

[9] See: https://www.gov.uk/government/speeches/queens-speech-2017

[10] What seems to have been overlooked in the government’s analysis is the complete unsuitability of current consumer protection law (as embodied in the Consumer Protection Act 1987) to deal with liability issues caused by AI devices.  The first concern is that the law is designed as a measure to protect consumers (i.e. real live persons, as opposed to legal persons) from personal injury.  Its focus is not on pecuniary loss and damage.  Secondly, it is not clear whether the definition of “product” under the CPA includes software and/or other products of an intellectual type, and thirdly there is the so-called “development risks defence”, which provides a defence to the manufacturer “if the scientific and technical knowledge at the time the product was manufactured was not such that a producer of a similar product might have been able to discover the defect” (s.4(1)(e)) – clearly a defence which will provide maximum wriggle room for the developers of AI technology!  See my 2016 paper (referenced earlier) for a more detailed discussion.

[11] See: http://ec.europa.eu/energy/sites/ener/files/documents/1_en_act_part1_v5.pdf

[12] Para 3.3 of the Report (Privacy and data protection safeguards), page 8

[13] See: https://www.theguardian.com/technology/2017/jan/05/japanese-company-replaces-office-workers-artificial-intelligence-ai-fukoku-mutual-life-insurance

[14] See: https://www.lapetussolutions.com/


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.

