In this Insight, we look at some of the issues on the horizon for users and implementers of AI, in 2020 and beyond.
The first trend to identify is that the technology industry is now waking up to the practical, real-world benefits of AI (specifically, that part of AI which is Machine Learning) and moving away from esoteric discussions about the ethics of the technology. And the first issue which is central to any AI use case is data.
Data was big in 2019 and is going to stay high on the agenda through 2020: both in the conventional sense of GDPR compliance and also in the context of data exchange and data sharing, which as an industry is set to grow rapidly in 2020.
Issues with the GDPR
The first point is the intersection between the requirements of the GDPR and the practicalities of deploying your AI solution.
As we have discussed previously, the way in which the GDPR is structured is fundamentally misaligned with “black box” predictive AI solutions – in particular, the requirements of purpose limitation and data minimisation create difficulties for systems which provide benefits that could not have been anticipated at the outset of their creation.
As Alexander Fleming famously put it, you sometimes find what you're not looking for – as was the case at Mount Sinai hospital, which ended up with a machine learning system that was expert in diagnosing adult schizophrenia (something the human doctors couldn't do) without quite understanding how it got there!
In addition to this, the requirement for businesses to provide privacy information notices to data subjects specifying clear information about the logic used in the underlying system can be a daunting task when inherently opaque machine learning systems are used.
Another challenge – and this is probably the most difficult issue to circumvent – is that when it comes to "automated processing" under Article 22, the GDPR makes it much harder to rely on the lawful processing ground of legitimate interests: in most cases you need to rely on the explicit consent of the data subject instead.
Throw into the mix the fact that European regulators are increasingly flexing their new and enhanced powers in relation to data breaches and you have a potentially combustible mix which threatens to have a "chilling" effect on AI investment and development in Europe. Given this, it is perhaps unsurprising that the Irish Data Protection Commissioner recently declared that the GDPR's consent model was effectively "broken" when it came to AI.
In what is perhaps a rear-guard attempt to deflect some of this criticism, the European Commission has issued a raft of guidelines on both the ethics and legality of AI systems in Europe, culminating in an announcement by Ursula von der Leyen, the new Commission President, that guiding principles for comprehensive AI regulation in Europe would be put in place within the "next 100" days. Those 100 days are almost up, so perhaps we should expect something soon from the EU. However, this begs the question: why does the EU think that creating more regulation and legislation will make it more competitive?
Data interchange and data sharing
We live and work in a landscape where different levels of integration and data sharing apply across different sectors. So, while it is relatively easy for financial services institutions to share data under open banking standards, for example, the construction and property industries work with technology much as they did in the 1980s.
Industry is beginning to wake up to the potential of standardising data interchange formats – and the transformative effect that this will have, in particular, on technologies such as AI. For instance, applying standardised data interchange in the construction industry could lead to buildings able to manage their energy requirements far more efficiently, or to integrate autonomously with city-wide utility grids.
This has led to what is known as the Data Trust initiative in the UK. A “data trust” is a digital construct which enables parties to share datasets through an impartial intermediary – in much the same way as a trustee holds assets for a beneficiary. We are working with a number of businesses that are trying to set up exchanges that would operate in this way.
That's the thinking – in reality, no one quite understands how these data trusts will operate or be structured. It is something that the UK's Open Data Institute and the UK Government's own Centre for Data Ethics and Innovation are currently considering. Expect 2020 to be a year in which data trusts have much more prominence in the public narrative.
Data trusts are, unfortunately, still some way off. Where we are now – and what you're likely to see more of in 2020 – is the trading of datasets between businesses on a contractual basis, via so-called "master data management", or MDM, agreements.
MDM agreements attempt to put in place arrangements for the systematic sharing of data between two or more organisations. For example, Osborne Clarke recently worked on MDM agreements between organisations such as large National Health Service hospitals in the UK that are looking to jointly exploit CT scanner data in order to improve image-analysis algorithms and, ultimately, patient care.
Practical Use Cases: AFR and NLP
2020 is going to be the year where we move increasingly away from theoretical discussions about the ethics of AI and Machine Learning towards the practical application of the technology in real world situations. That is not to say that ethical discussions aren’t important – it’s just that they will be tempered and refined by use of AI and ML “in the field”.
The two areas we see really booming over the next 12 months are Automated Facial Recognition (AFR) and Natural Language Processing (NLP) – partly because these are the areas that have seen the most improvement in the algorithms underpinning them.
Until now, much of the debate around AFR has concerned its use in the public or government sector – especially around surveillance use by the police.
The first ever court case on AFR was brought against South Wales Police last year. This helped to clarify some of the legal circumstances in which the police can use AI to identify potential suspects and offenders. The case was brought by civil liberties groups and raised a number of issues around GDPR compliance in relation to biometric data gathering, and around Article 8 of the European Convention on Human Rights, which requires governments to respect an individual's private life. In that case, it was held that the police were within their rights to deploy the technology and had undertaken all of the regulatory steps that were required of them.
Whilst police (and governmental) use of the technology will continue throughout 2020, we can expect to see private sector use grow as well. Retail is seen as an especially attractive market to target. AFR can enable “take and go” shopping (much as in the Amazon Go model, where you simply walk in, pick up an item and go), as well as improving shopper analytics and providing enhanced opportunities for targeted individual marketing.
As mentioned above, however, AFR raises significant issues around compliance with the GDPR and with laws governing CCTV surveillance. Some of these are relatively intractable. 2020 is likely to see that tension brought into the foreground – particularly with EU regulators, which are likely to try to block what some see as the inevitable adoption of this technology.
NLP is another area which has seen significant improvement in the last 12 months, and as a result we can expect to see more NLP offerings coming on-stream in 2020.
To date, this technology has largely been the preserve of voice-enabled assistants such as Amazon's Alexa. Expect 2020 to be the year when it is directed more at the enterprise – and is used to enhance offerings or applications that already entail some form of voice monitoring. For example, where calls need to be monitored under financial services legislation such as MiFID II, or in a telephone job interview context, NLP can go one stage further and identify whether or not the relevant speakers are showing stress or are acting under other emotional pressure.
Again, use of voice enabled NLP will raise very similar regulatory questions to AFR in terms of the processing of biometric data when it is being used to identify individuals – so we should expect to see similar showdowns on this front as well.
The final area which is worth mentioning is that of Artificial Intelligence “as-a-Service”.
AIaaS is, in essence, a pre-packaged AI service offered via the cloud.
One of the biggest blocks to date for small and medium-sized enterprises in the deployment of machine learning systems and tools has been the level of investment and expertise required to successfully build and train such systems. Unsurprisingly, this requires a degree of specialist skill that is beyond the means of most in the SME sector and possibly even some larger businesses – it is also, in many cases, "non-core" to their business objectives.
Coupled with this has been a clear trend in the outsourcing market, driven by the ubiquity of cloud computing, away from traditional "on-premise" solutions towards service-based, on-demand SaaS (Software as a Service) solutions and cloud-based architectures for the delivery of complex managed services.
AIaaS solutions are broadly speaking structured either as data, compute or pre-packaged services.
AIaaS data solutions provide the convenience of ready-made pools of data to enable fast and effective training of machine learning models for common or generic problems – these might, for example, be comprised of masses of facial images.
Compute services enable common infrastructural computing tasks – such as batch processing – to be carried out in close integration with machine learning solutions. Pre-packaged AIaaS is the provision of ready-made AI applications, on demand, that can be integrated into enterprise offerings.
Use of AIaaS solutions, whether pre-packaged or via data or compute models, clearly brings convenience and speed of implementation to many “standard” machine learning solutions, such as facial recognition, language translation or NLP. Over the next 12 months these offerings will increasingly be heavily marketed to enterprise users – with convenience and ease being the key selling messages.
However, it is important not to overlook from a legal perspective that such solutions introduce additional levels of complexity to what may already be a significantly complex outsourced arrangement.
Disclosure and transparency are likely to be key requirements for the customers of AIaaS providers, but will prove a very difficult balance for the platform providers themselves, who will naturally be incentivised to keep as much as possible confidential, for fear of giving away valuable proprietary information.
Complex decision-making machine learning systems are likely to be harder to manage from a liability perspective if they are dependent upon AIaaS offerings. This is because in the event of a contract default – or even circumstances where you need to be able to justify or explain the consequences of a particular decision to an external third party, such as a regulator – you would be faced with having to work back through the licence terms granting access to the AIaaS platform. You would then have to determine the extent to which the provider of the AIaaS cognitive computing platform itself is willing or able to provide an explanation or demonstrate that it is not the cause of the relevant default – not of itself a trivial problem!
For all of the challenges, 2020 is set to be a breakthrough year for AI in an enterprise context. To discuss what this means for your business or if you have any other AI-related questions, please contact John Buyers, Head of AI at Osborne Clarke, or your usual Osborne Clarke contact.