CDEI research identifies barriers to data use and ethical artificial intelligence

Written on 8 Jul 2020

Public trust and regulation emerge as crucial from the Centre for Data Ethics and Innovation's in-depth analysis of AI and data use in the UK.

The UK government advisory body set up to inform policy around data-driven technology has released its first AI Barometer, which focuses on criminal justice, financial services, health and social care, digital and social media, and energy and utilities, with further sector reviews planned.

The Centre for Data Ethics and Innovation (CDEI) report identified three barriers to greater use of data and adoption of artificial intelligence (AI), all of which involve the issue of public trust and raise the need for regulation: low data quality and availability; a need for coordinated policy and practice; and a lack of transparency around AI and data use.

The CDEI's report aims to provide "a major analysis of the most pressing opportunities, risks, and governance challenges associated with AI and data use in the UK, initially across five sectors".

While the CDEI's report does not define "ethics", it puts a strong emphasis on the need for AI and data to be used in a way that is considered trustworthy – and argues that public distrust is a "fundamental brake on innovation". There are echoes here of the EU Commission's approach to AI regulation, seeking to create an "ecosystem of trust".

Although there has been relatively little regulatory activity in this area so far, there are clear indications that more regulation is coming down the line. Companies that are active in the use of data and AI will need to be aware of the direction of travel of emerging regulation in order to factor it into development strategies. (See the box below for our recommendations for next steps).

Low data quality and availability

Data itself is identified as one of the barriers to ethical AI innovation. The CDEI suggests that this could be addressed by "investing in core national data sets, building secure data infrastructure, providing trusted data sharing mechanisms and ethical data regulation". This would help to increase both data availability and data quality.

The AI Barometer observes that the energy and utilities sector could benefit from greater public trust in data sharing models in order to increase data collection, which in turn could be used to increase energy efficiency and support decarbonisation initiatives. The legal sector is actively engaged in developing innovative data sharing models that integrate ethical considerations alongside legal issues. Delivering these elements side by side can help to overcome barriers to data sharing. For example, OC Solutions is working with the UK start-up Engine B, a data sharing platform, which is hoping to transform and standardise the way in which data are exchanged between professional services firms.

A need for coordinated policy and practice

Governance and the lack of coordinated policy and practice receive particular attention in the report, because these are issues that can be addressed by regulation. Inadequate governance is seen as a major barrier to the ethical use of AI and data. The report observes that industry will fill regulatory vacuums with its own standards and that, without a clear regulatory steer, there is a risk of diverging and potentially undesirable practices developing. One of the functions of regulation is to ensure that public interest considerations, such as consumer protection and fair competition, are given due weight alongside corporate priorities.

The CDEI, in its discussion of risks and barriers in the financial services sector, observes: "Regulation, not just consumer awareness, will be necessary to uphold fairness in the use of AI and data". This is particularly important in sectors where the use of AI could lead to measurable changes to an individual's life, such as the potential impact of being given an incorrect credit score.

Data governance goes beyond what is required for formal legal compliance. The CDEI comments that, while the General Data Protection Regulation and the Data Protection Act 2018 have recently strengthened the data governance landscape in the UK, there remain areas where a lack of clarity prevents organisations from sharing their data. For example, the report cites the reluctance of hospital trusts to share data because of uncertainty around the concept of meaningful consent. One solution for this considered by the CDEI in a June 2018 consultation – although not mentioned in its barometer report – is the concept of a data trust as a novel framework for sharing data.

A lack of transparency around AI and data use

The report notes that a lack of transparency from businesses and the public sector about their use and governance of AI and data-driven technology can prevent scrutiny and accountability. These risks are especially pertinent because of the potentially high impact of incorrect decision-making tools; for example, a misdiagnosis by a healthcare diagnosis tool or the misidentification of a police suspect.

Bias and a lack of explainability were identified as significant cross-sector risks. Algorithmic bias can be a result of the low quality of data available to develop AI tools. A lack of diversity in the workforce across the tech ecosystem is also a risk. In our view, "low quality" data is more likely to be an issue with the curation of the data set than with the quality of the data itself: for example, an insufficiently diverse and unrepresentative data set about people (which can result in biased AI outputs) is an issue that should be investigated when deciding whether the data set is appropriate for the application concerned.

Osborne Clarke comment

The AI Barometer is an extensive and detailed report, which provides a meaningful deep dive into the risks and opportunities for the five sectors covered. With the CDEI planning to expand the sectors covered by the AI Barometer in the next 12 months, this is set to become a significant body of work around the use of ethical AI and data sharing.

The report underlines how the CDEI – created "to connect policymakers, industry, civil society, and the public to develop the right governance regime for data-driven technologies" – feeds directly into government policymaking. The need to involve the general public, and the central function of public trust that the AI Barometer highlights, is critical. The report echoes similar observations in the White Paper on Regulation for the Fourth Industrial Revolution, published in July 2019, which discusses the lack of a settled consensus or accepted set of social norms around different aspects of AI and data use. Bringing the public into the debate and prioritising trust are fundamental goals.

It is also essential that any regulatory framework considers all angles and does not ignore or over-emphasise a particular element of the problem. For example, the Information Commissioner's Office (ICO) has issued useful guidance on data issues in relation to AI tools but, reflecting its remit, this guidance is light on non-data issues that can be very significant. The risk is that businesses might assume that following the ICO's guidance is sufficient to manage legal risks, when it is only part of the picture.

The AI Barometer's focus on regulation for strong governance is significant for the drive and direction of UK policy. We have noted previously that businesses using AI often do not prioritise governance. The European Commission has already put proposals for regulating AI on the table in its recent White Paper, although the Brexit transition timings are likely to mean that the UK will not be bound to implement any resulting legislation. This does not mean that EU regulation will not have an impact on UK companies, as any sales into the EU27 will need to be EU-compliant. Although the ability to map its own path enables the UK to focus on the best interests of its public and businesses and to heighten its attractiveness to non-EU trading partners, it also creates a potential double compliance burden for UK-EU exporting businesses.


Takeaway: What should your business do now?

If your organisation is planning to develop its use of data or AI, you should think about putting in place ethical usage and governance policies and building your systems in a way that addresses the issues raised by the CDEI report. We recommend paying particular attention to:

  • Ensuring that data sets used are high quality, up to date and appropriate.
  • Baking data governance into system development pathways so that ethical issues are considered as a matter of course, and at an early stage.
  • Thinking about how transparency and accountability mechanisms can be improved.
  • Training your people so that all those involved in specifying and implementing data/AI projects are aware of the technical and legal issues.

If you would like to speak to a member of our team about legal and ethical issues around the use of data or artificial intelligence, please get in touch with one of the authors or your usual Osborne Clarke contact.