
Retailers adopt facial recognition technology as regulators play catch-up

Published on 21st Apr 2022

Automated facial recognition raises broad challenges for privacy regulators, but industries such as retail are using the technology ahead of specific legislation


Automated facial recognition (AFR) is already being used widely as a tool to improve safety and assist in operational processes, with some of the better-known and accepted uses including unlocking mobile phones and verifying faces at airport passport gates.

Sitting under the umbrella term of artificial intelligence (AI), AFR is a technology that measures the likeness of a face to a stored image by converting each image into a numerical representation via computer-generated filters. Once the data has been processed, the AFR algorithm produces a "similarity score" which evaluates the likeness of the two facial patterns in order to verify an identity or to place the individual in a specific group.
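
By way of a simplified illustration, the sketch below shows one common way a "similarity score" can be computed: comparing two numerical face representations (embeddings) using cosine similarity. The embedding values, threshold and function names here are illustrative assumptions for this sketch only, not the workings of any particular AFR product.

```python
# Illustrative sketch only: comparing two face "embeddings" -- numerical
# vectors assumed to have been produced by a trained AFR model.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Return a similarity score in [-1, 1]; higher means more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for a live camera image and a stored reference.
probe_face = [0.12, -0.45, 0.88, 0.05]
stored_face = [0.10, -0.40, 0.90, 0.02]

MATCH_THRESHOLD = 0.8  # assumed operational threshold, tuned per system

score = cosine_similarity(probe_face, stored_face)
print(f"Similarity score: {score:.3f}")
print("Identity verified" if score >= MATCH_THRESHOLD else "No match")
```

In practice, the chosen threshold determines the trade-off between false matches and false rejections, which is one reason the calibration and testing of such systems attracts regulatory scrutiny.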

With the Covid-19 pandemic driving demand for hands-free shopping and for assessing foot traffic, it is unsurprising that the retail industry has joined others in implementing AFR. There are notable benefits for both customers and employees, such as identifying loyalty-club members as they enter stores, automated self-service checkouts where shoppers "pay with their face", and assessing employee productivity.

However, the use of AFR is controversial, not only from an ethical perspective, given the ability to track and identify people without their consent, but also in terms of discrimination, as the facial image databases on which AFR depends do not always accurately represent the demographics of the general population.

Deploying AFR: the legal challenges

With little to no legislation expressly governing the use of AFR, there are a number of legal issues concerning its deployment. A few examples are:

  • GDPR and lawful processing. AFR systems generally involve the processing of personal data that is regulated by the General Data Protection Regulation (GDPR) and the Data Protection Act 2018. Under article 6 of the GDPR, there needs to be a lawful basis for processing this data. Additionally, any biometric data processed by the AFR will constitute "special category" data under the GDPR, in which case the processing cannot be lawful unless an article 6 requirement is met and one of the exemptions in article 9 applies (for instance, substantial public interest). The South Wales Police case, discussed in the human rights section below, made clear that organisations adopting AFR need to recognise the importance of minimising the retention of biometric data.
  • Discrimination. As has been widely publicised, AI systems can be biased and therefore cause discrimination. This is a particular concern for AFR systems that rely on characteristics such as ethnicity and gender to identify an individual. Data ethics and law are becoming increasingly entwined. AFR users need to consider the quality and quantity of datasets that algorithms are trained on, as well as how they can demonstrate that the technology will not make discriminatory decisions.
  • Human rights. The Court of Appeal ruled in R (Bridges) v Chief Constable of South Wales Police & Ors that the use of AFR technology by South Wales Police breached privacy rights under article 8 of the European Convention on Human Rights (ECHR), with the judgment having broad significance for businesses and organisations looking to adopt the technology. Human rights considerations need to be taken into account when balancing the rights and freedoms of individuals with the benefits that AFR could bring.
  • AI 'black box' transparency issues. AFR systems use machine learning and AI technology that can create issues regarding transparency. This is due to the "black box" issue of not knowing why a particular output has been generated, which is inherent in the way machine learning systems function: they are set up to self-adjust and self-calibrate on an ongoing basis with each piece of data that is passed through them. They use a maths-based process that is fundamentally different from human logic and reasoning, and therefore not simple to explain. Organisations using AFR should therefore seek, as far as possible, to ensure that the machine learning technology they use is sufficiently transparent for its operation to be explained to individuals.
  • DPIAs. Under the GDPR, an organisation must conduct a Data Protection Impact Assessment (DPIA) for any "high risk" processing, which is likely to include the use of AFR. Organisations that deploy AFR therefore ought to have in place a detailed operational policy document covering its use, as well as a rigorous, comprehensive and documented DPIA.

The Surveillance Camera Code of Practice

In light of the South Wales Police case, the UK government announced a consultation on the first revision to the Surveillance Camera Code of Practice. There has been widespread media coverage of the proposed revisions, which include the use of facial recognition camera systems by local authorities and the police.

EU draft AI Regulation

After extensive consultation, the European Commission has unveiled its proposed regulatory regime for AI. The legislation envisages a full regulatory framework, including new EU and national bodies with strong enforcement powers, and provides for heavy fines on businesses for non-compliance. It is shaped around the level of risk created by different applications of AI, so its impact will differ across developers and providers of AI.

As was widely anticipated, the Commission's proposal largely outlaws the use of real-time automated facial-recognition systems in publicly accessible places by public authorities for the purposes of law enforcement.

Exceptions to the ban include a targeted search for a specific potential victim of crime, preventing a specific, substantial and imminent threat to life or a terrorist attack, and certain situations where there is a search for a criminal suspect or perpetrator. These systems can only be used subject to safeguards, limitations (including time and geography of use), the requirements of proportionality and necessity, and (usually) with prior judicial authorisation.

However, in November 2021 the Council of the EU, under the Slovenian presidency, published its compromise text on the EU's AI Regulation, with a notable change being that "general purpose" AI should not fall within the AI systems banned by the Regulation. General-purpose AI systems are those able to perform generally applicable functions such as image recognition – a term wide enough to include AFR.

This is not the Council's final text; the European institutions are still reviewing the responses to the public consultation.

A UK AI regulation? 

The UK government published its National AI Strategy on 22 September 2021. The strategy states that the government aims to create a “pro-innovation” regulatory environment that will make the UK an attractive place to develop and deploy AI technologies, all the while keeping regulation “to a minimum”.

While the UK government has not yet set out detailed proposals, more detailed plans for how AI will be regulated in the UK are expected in the forthcoming AI Governance White Paper, due to be published in 2022.

The EU vs UK view: a different regulatory approach?

Privacy watchdogs across Europe appear to be using the GDPR to regulate fast-developing AFR technology, rather than waiting for the EU AI Regulation to be passed. For example, in Sweden the Datainspektionen fined a high school board SEK 200,000 for its use of AFR.

Until the EU AI Regulation comes into force (with most commentators expecting this to land, at the earliest, in 2025), the absence of specific laws governing the use of AFR means that it will be left to regulators to establish what is acceptable within the existing legal frameworks.

While there are currently no detailed proposals from the UK government for AI regulation, the UK's National AI Strategy signals that the UK will not prohibit controversial uses of AI, instead stressing the economic benefits of rolling out the technology across the private sector.

As the regulatory frameworks emerge, it will be interesting to see whether the government chooses AFR (or, indeed, AI more generally) as an area in which the UK will diverge from the EU. In the meantime, the rapid adoption of AFR by companies means that some uses of AFR are already becoming established.

Ben Hillier helped co-author this Insight

We have a number of experts who can provide use-case regulatory assessments and practical legal advice for organisations making use of biometrics, algorithms, machine learning and AI in general.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
