
UK Digital Regulation Cooperation Forum invites views on algorithmic processing papers

Published on 25th May 2022

Businesses should seize the opportunity to influence the future regulatory approach to algorithmic processing in the UK: the DRCF's deadline for comments is 8 June 2022


On 28 April 2022, the Digital Regulation Cooperation Forum's (DRCF) Algorithmic Processing workstream published two discussion papers alongside the DRCF's 2022/23 workplan, a key theme of which is collaboration between regulators on a variety of projects, including improving transparency in the use of algorithms. The first paper discusses the benefits and harms of algorithms; the second considers the landscape of algorithmic auditing and the role of regulators within it. The DRCF invites comments on both papers by 8 June 2022.

The DRCF was established on 10 March 2021. (To read more about its inception and previous work, including its creation of a digital regulation research portal, see our previous Insight.)

Algorithmic processing involves the use of an automated system to collect and process data. Artificial intelligence (AI) or machine learning systems are some of the more advanced forms of algorithmic processing, but there are many other data analytics approaches. Algorithmic processing systems are becoming increasingly widespread in modern society across all sectors, for example to identify fraudulent activity in financial services, to connect us with friends at the touch of a button and to translate languages easily.
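
By way of illustration, the minimal Python sketch below shows the sort of simple, non-AI algorithmic processing the paper has in mind: a hypothetical rule-based check that flags potentially fraudulent transactions. The fields, threshold and rules are invented for this example and are not drawn from the DRCF papers.

```python
# Illustrative only: a toy rule-based fraud check, one of the simpler
# forms of algorithmic processing (no machine learning involved).
# All fields, thresholds and rules here are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Transaction:
    account_id: str
    amount: float
    country: str

def flag_suspicious(tx: Transaction, home_country: str = "GB",
                    limit: float = 5000.0) -> bool:
    """Flag transactions that are unusually large or made abroad."""
    return tx.amount > limit or tx.country != home_country

transactions = [
    Transaction("A1", 120.0, "GB"),    # not flagged
    Transaction("A1", 9800.0, "GB"),   # flagged: over the limit
    Transaction("A2", 45.0, "FR"),     # flagged: foreign country
]
for tx in transactions:
    print(tx.account_id, tx.amount, "->", flag_suspicious(tx))
```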

The benefits and harms of algorithms

In its first paper, the DRCF highlights the importance of taking due care when implementing and managing algorithmic systems. Otherwise there is a risk of amplifying harmful biases that can reinforce inequalities, or of misleading consumers, distorting competition or jeopardising personal privacy rights. 

The DRCF divides these risks into intentional and unintentional harms. An example it gives of an intentional harm is the use of algorithmic processing to automate "spear phishing" attacks (where information about an individual is used to create personalised communications intended to trick the individual into disclosing confidential information, such as bank account details, or into downloading malicious software). But the DRCF considers that a far more prevalent risk comes from unintentional harms, such as when algorithms carry and amplify harmful bias. For example, a search engine might return only results that are skewed towards a particular viewpoint.
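
To make the amplification mechanism concrete, here is a toy simulation, not taken from the DRCF paper, of how a ranker that learns from engagement can turn a small initial skew into a heavily one-sided result. The starting figures and the 90% click-through assumption are invented.

```python
# Illustrative only: a toy feedback loop in which a ranking algorithm
# trained on past clicks amplifies a small initial skew. All numbers
# here are invented for the sake of the example.
import random

random.seed(0)
clicks = {"viewpoint_a": 55, "viewpoint_b": 45}  # slight initial skew

for _ in range(1000):
    # The historically more-clicked item is ranked top and is assumed
    # to attract the next click 90% of the time.
    top = max(clicks, key=clicks.get)
    other = min(clicks, key=clicks.get)
    clicked = top if random.random() < 0.9 else other
    clicks[clicked] += 1

total = sum(clicks.values())
for item, count in clicks.items():
    print(f"{item}: {count / total:.0%} of all clicks")
# A roughly 55/45 starting split typically ends up far more lopsided.
```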

These risks make it vital for regulators to understand the landscape of algorithmic processing systems. The paper comments that this will require DRCF members to focus on building their expertise in the following six priority areas:

  • transparency of algorithmic processing;
  • fairness for individuals affected by algorithmic processing;
  • access to information, products and services;
  • resilience of infrastructure and algorithmic systems;
  • individual autonomy to enable informed decision-making and participation in the economy; and
  • healthy competition to foster innovation and better outcomes for consumers.

If the potential risks are addressed, the DRCF believes algorithms can deliver a number of benefits, such as improved productivity, new tools for disadvantaged groups and better methods of managing content. However, the paper notes that these benefits depend on responsible innovation in how algorithms are deployed, which the DRCF intends to facilitate both in the short term and into the future.

To address these challenges, the DRCF highlights the importance of collaborative working between its members. It proposes to provide businesses with guidelines on best practice for various areas of algorithmic development, including identifying inadvertent harms and improving transparency. 

The paper observes that guidance could advise businesses on how to structure their use of algorithms. A key concept to be developed is the "human in the loop" method of reducing the risks of algorithms, which ensures that a human is involved in assessing an algorithm's outputs. The paper warns, however, that humans might place too much trust in the algorithm they are meant to be overseeing, and might struggle to interpret its outputs.
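
A minimal sketch of what a "human in the loop" arrangement can look like in practice, assuming a model that returns a label together with a confidence score; the 0.8 threshold, the toy model and the review queue are all invented for illustration:

```python
# A minimal "human in the loop" sketch: low-confidence outputs are
# escalated to a human reviewer instead of being acted on automatically.
# The threshold, toy model and queue are invented for illustration.
from typing import Callable, List, Tuple

def classify_with_review(
    item: str,
    model: Callable[[str], Tuple[str, float]],
    review_queue: List[str],
    threshold: float = 0.8,
) -> str:
    label, confidence = model(item)
    if confidence < threshold:
        review_queue.append(item)  # hand off to a human reviewer
        return "pending human review"
    return label

def toy_model(item: str) -> Tuple[str, float]:
    # Stand-in for a real classifier: confident only on familiar input.
    return ("approve", 0.95) if "invoice" in item else ("approve", 0.55)

queue: List[str] = []
print(classify_with_review("invoice #42", toy_model, queue))
print(classify_with_review("unusual request", toy_model, queue))
print("awaiting human review:", queue)
```

The design only mitigates risk to the extent the paper's caveat is heeded: the human reviewer adds value only if they genuinely scrutinise the escalated cases rather than rubber-stamping the algorithm's suggestion.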

Algorithmic auditing paper

In its second paper, the DRCF explores the concept of algorithmic auditing. Algorithmic auditing refers to a range of approaches for reviewing algorithmic processing systems, including how a system operates and how it is used by businesses. Audits can be carried out by regulators, internal teams or third parties. By closely monitoring the use of algorithmic systems and processes, auditing helps ensure that risks are understood and can be managed or mitigated, avoiding some of the potential harms of algorithms while promoting their benefits.
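
As one hedged example of what a narrow technical audit check might look like, the sketch below compares outcome rates across groups. The data and the five-percentage-point tolerance are invented, and real algorithmic audits span far more than a single statistical-parity metric.

```python
# Illustrative only: a single audit check comparing outcome rates
# across groups. The data and the 5-point tolerance are invented;
# real algorithmic audits cover much more than this one metric.
from collections import defaultdict

decisions = [  # (group, approved?) pairs from a hypothetical system
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: approvals / total for g, (approvals, total) in counts.items()}
gap = max(rates.values()) - min(rates.values())
print({g: f"{r:.0%}" for g, r in rates.items()})  # approval rate per group
print("parity gap:", f"{gap:.0%}",
      "-> investigate" if gap > 0.05 else "-> acceptable")
```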

The paper identifies a number of issues in the algorithmic auditing landscape. These include a lack of clear standards, inconsistency in applying standards where they do exist and insufficient follow-up action taking place where an algorithmic audit identifies issues.

Consequently, the paper focuses on developing better foundations for algorithmic auditing and identifies six "hypotheses" for these foundations. These range in scope from requiring regulators to set out prescriptive standards for algorithmic auditing, to a more hands-off approach offering basic guidelines that advise internal and external teams on how to carry out algorithmic audits. The latter might involve developing accreditation criteria for auditors while leaving auditors to decide how best to meet them.

Osborne Clarke comment

Although it is likely to take some time for industry standards and regulations to emerge from these papers, it is clear that UK regulators are paying attention and will continue to monitor developments in algorithmic processing. Indeed, the DRCF's workplan for the next 12 months includes improving its capabilities in algorithmic auditing, researching the third-party algorithmic auditing market and considering how regulators can work to promote transparency in algorithmic procurement.

While the UK is not currently taking the same regulatory approach as the EU with regard to AI – it is not pushing forward with formal regulations similar to the EU's AI Act – the publication of these papers may indicate the intended direction of regulatory travel in the UK. We will know more when the UK government's anticipated White Paper on regulating artificial intelligence is published later this year. 

The DRCF invites comments on both of these papers by 8 June 2022. This presents a prime opportunity for businesses to influence the future of the regulatory approach to algorithmic processing.

If you would like to discuss this article further, please do not hesitate to get in touch with one of our experts, or your usual Osborne Clarke contact.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
