AI and the future of Dispute Resolution: Computer Says No?

Published on 22nd Jul 2021

Will your dispute be decided by a human being in the future?

In his recent keynote speech during London International Disputes Week, Sir Geoffrey Vos, Master of the Rolls and Head of Civil Justice in England and Wales, discussed the advantages of online dispute resolution and referred to the use of artificial intelligence (AI) and smart programming "to suggest resolutions (not, of course, to determine outcomes)".

How far have other countries gone in the use of AI to help resolve disputes? And what are the pros and cons of using AI not just to help with case management but also with reaching decisions about a case?

But first, what is AI?

The type of AI being considered in this area is "machine learning": essentially, where an AI system is designed to map patterns in the training data passed through it and then draws on that mapping to generate outputs and answers. The AI system is designed, and the training data selected, by human programmers, but the mapping process is automated and occurs without human intervention. As further data is passed through the system, it continues to adjust and calibrate its pattern mapping to home in on the outputs that are statistically least wrong, given the data it has seen. Once trained, an AI system can generate decisions and predictions, spot analogies and so on. It is typically very difficult to "look under the bonnet" of such systems. The developers will test the system to check that its outputs are correct, but there will often be no transparency as to how or why a particular output was arrived at.
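For readers who want a concrete picture of what that pattern mapping looks like in practice, the toy sketch below fits a simple statistical model to invented data. Everything in it is hypothetical: the features (claim value, days overdue), the synthetic outcomes and the choice of library (scikit-learn) are assumptions for illustration only, not a description of any system mentioned in this article.

```python
# Purely illustrative sketch of supervised machine learning: a model is fitted
# to labelled historical examples and then used to predict outcomes for new ones.
# All feature names and data here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a past claim described by two
# made-up features (claim value in euros, days the debt has been overdue).
X = rng.uniform(low=[100, 0], high=[7000, 365], size=(500, 2))

# Synthetic "outcomes" (1 = claim succeeded), generated by a simple rule plus
# noise, standing in for the historical decisions a real system would learn from.
y = ((X[:, 0] < 5000) & (X[:, 1] > 90)).astype(int)
flip = rng.random(500) < 0.1
y = np.where(flip, 1 - y, y)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" maps the patterns in the data onto statistical weights...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...which are then used to generate outputs for cases the system has not seen.
print("accuracy on held-out cases:", model.score(X_test, y_test))
print("predicted outcome for a new claim:", model.predict([[2500, 120]])[0])
```

The point of the sketch is the workflow, not the numbers: the system's "knowledge" is whatever statistical regularities happen to be in the training data.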

Use of AI to date

Many countries have started to adopt AI in dispute resolution and a few are beginning to use it to replace judges (at least initially) for low value, low complexity disputes.

Estonia is at the forefront of this in Europe and has designed, and recently started to implement, a system which allows AI to issue a decision (that is appealable to a human judge) in cases involving disputes worth less than €7,000.

In China, the Hangzhou Internet Court, which operates 24/7, uses virtual judges to reach decisions in disputes involving digital matters. The average length of a case is around 40 days and, although rulings can be overturned by human judges, no appeal has been brought in around 98% of cases. Two further internet courts have now been established in Beijing and Guangzhou, using machine learning technology to automatically generate judgments for judges to review in certain cases.

Even where AI does not reach decisions, it is being used around the world to assist judges in reaching them. This has at times proved controversial: for example, in the US, COMPAS is a risk-assessment tool that purports to predict a defendant's risk of committing another crime and so assist judges in determining whether to grant bail. But "baked in" biases in the historical data used to train the system meant that COMPAS replicated those biases in its predictions.

The benefits and downsides of AI

The appeal of AI in decision making is clear: there are, quite literally, millions of disputes in the world (fuelled in part by the huge rise in online payments and e-commerce) and AI could help to resolve smaller value claims more quickly, freeing up court time to hear higher value and more complex cases. For straightforward debt claims, the use of AI could allow claimants to obtain an enforceable judgment quickly and cheaply.

AI might have other advantages too: most notably, greater consistency – it will not have "off" days and will produce the same result, given the same criteria, on any given day.

Many would argue that AI could never work in a case in which the judge must decide which witness is telling the truth. AI is essentially driven by mathematics and has nothing comparable to the intuition and "gut feel" of a human being. But developments are being made in this field all the time: although lie detectors have traditionally received a bad press, repeated studies have shown that humans are not particularly good at detecting lies either. And advances in AI emotion-recognition systems, which look for tell-tale physical indicators that someone is not telling the truth, mean that lie detectors are improving all the time.

AI is unlikely to assist parties in reaching a settlement, though. While AI might be adopted for use in litigation and arbitration in some cases, it is perhaps less likely to be a useful tool for mediation, where human persuasion, empathy and "common sense" are the key advantages of using a mediator. That said, as with litigation and arbitration, low value, low complexity cases might be amenable to AI mediation.

Also, one of the chief issues with AI-issued judgments is that they provide no reasoning and so offer litigants no way to understand how a decision has been reached. We are currently a long way from overcoming that problem. Even if the lack of transparency in most machine learning systems could be overcome, the manner in which an AI system reaches its decision is completely different from human reasoning, being based on the statistical weightings and biases set by reference to the training data. So the "reasoning" from an AI system may not shed much light on which elements of the evidence were decisive.
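To illustrate why that "reasoning" is unlikely to read like a judgment, the hedged sketch below prints the only explanation a simple trained model can offer: its learned numeric weights. The feature names, data and model are invented for illustration and are not drawn from any system discussed above.

```python
# A hypothetical sketch of what "reasons" means for a simple trained model:
# one statistical weight per feature, nothing resembling a judge's reasoned
# assessment of the evidence. All names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["claim_value", "days_overdue", "written_contract"]

# Invented data: 300 hypothetical past claims described by three made-up features.
X = np.column_stack([
    rng.uniform(100, 7000, 300),   # claim value in euros
    rng.uniform(0, 365, 300),      # days the debt has been overdue
    rng.integers(0, 2, 300),       # whether a written contract exists (0/1)
])
y = ((X[:, 2] == 1) & (X[:, 1] > 60)).astype(int)  # synthetic outcomes

model = LogisticRegression(max_iter=1000).fit(X, y)

# The nearest thing to an explanation: a larger magnitude means a stronger
# statistical pull on the prediction, but nothing here says which evidence
# was believed or why, in the sense a litigant would expect.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>17}: {weight:+.4f}")
```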

Even more fundamentally, there is the question of whether litigants see AI as a fair and impartial way to resolve disputes. As the experience with COMPAS demonstrates, AI is only as impartial as the training data fed into it. But arguably it is no more partial than human judges and, moreover, it is free from the influences of human relationships and commercial interests that could, in principle at least, affect human decision making. As the recent Supreme Court decision in Halliburton v Chubb recognised, where the decision maker is paid by the parties rather than out of the public purse, this has the potential to be problematic and to give rise to suspicions of partiality. If the foundations on which the machine learning is based can be shown to be fair and impartial, this aspect may become less of an issue for litigants.

There will undoubtedly be limits to how far AI will be able to progress: will we ever reach a stage where AI could assess complex legal arguments or even develop new legal principles? That is currently the stuff of science fiction. But then, not so long ago, so was the very idea of a robot judge.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
