Employment and pensions

AI's growth raises immigration considerations for UK government and employers

Published on 27th Oct 2023

The EU's AI Act categorises immigration as a high-risk area for AI but there is no equivalent legislation yet in the UK

The potential of artificial intelligence (AI) has become an increasing focus for government and businesses in 2023 – and AI technology is likely to play a larger part in immigration processes in the future both in terms of Home Office processing and employer decisions.

As the UK government appears to favour guidance over regulation in this area, it is particularly important for employers to be aware of potential risks and benefits related to AI.

AI specialist jobs

The progression of AI is likely to increase the demand for tech specialists in this area. These AI-related jobs will be eligible for sponsorship under the skilled worker visa. They are currently also recognised on the shortage occupation list under the categories 2136 "programmers and software development professionals" and 2137 "web design and development professionals".

However, the Migration Advisory Committee's review of the shortage occupation list, published at the start of October, recommends abolishing the list entirely and retaining a single "going rate" for jobs regardless of whether there is a shortage. If these recommendations are accepted, employers would not be able to pay below the going rate for AI specialist jobs when recruiting internationally.

Discrimination risks

It may be tempting to use AI in recruitment. Amazon previously used an AI recruitment tool to sort through candidates' job resumes. The tool was eventually scrapped when it was found that applications from women were being downgraded. The data the tool had been trained on had taught the AI to favour men's resumes over women's, because applications from men were more common and more successful.

There are similar risks of discrimination in an immigration context. Applications from people who reveal a non-British nationality or non-UK address could be downgraded. This could be because their nationality is less common or even because the tool recognises the additional immigration costs and administration involved. To avoid this, there needs to be a solid anti-discrimination policy and appropriate safeguards in place.

Automate with caution

While there is potential for some functions of HR professionals to be automated, this should be done with extreme caution. The authorising officer retains personal responsibility for use of the sponsorship management system. In the event of an audit, they should be able to demonstrate that they understand their responsibilities and are fully aware of where relevant HR documents are stored.

Although it has not been tested, it seems highly unlikely that the use of an AI tool would be a defence for fines or the downgrading of a licence – at least, when the functions of the AI tool are not being carefully monitored.

AI is also likely to be increasingly involved in Home Office decision-making in the future, although it has been used in the past with varying degrees of success. In 2020, a streaming tool that wrongly identified applications as "high risk" was scrapped. In 2014, voice recognition technology concluded that most English language tests reviewed for cheating were suspicious, causing thousands of visas to be cancelled.

Discriminatory algorithm scrapped

In August 2020, the Home Office scrapped a decision-making algorithm which had been used to filter visa applications since 2015, after a judicial review claim was set to challenge its legality.

The claim, made by the Joint Council for the Welfare of Immigrants, was on the basis that the algorithm was racially discriminatory. It had been assigning applicants a colour based on a "traffic light" system, leading to some applications being placed in a "fast lane" and some being more rigorously scrutinised on the basis of characteristics including race, nationality and address.

The Home Office announced when withdrawing the algorithm that it would be looking into issues of unconscious bias to avoid the same situation arising in the future. A variation of the algorithm was used for the EU Settlement Scheme to categorise applications into traffic-light categories and determine the level of scrutiny applied by a caseworker thought to have the appropriate skills, profile and experience for the traffic-light category in question.

English-testing chaos

Applicants under the Skilled Worker route need to show an English competency at B1 or above, unless they are exempt. One of the ways of proving this is via an approved English test.

In 2014, a voice recognition tool from the Educational Testing Service, commissioned by the Home Office, reviewed approved English tests between 2011 and 2014 to detect whether there had been cheating and concluded that 97% of tests were suspicious.

Thousands of visa holders were investigated and had their visas cancelled. Many maintained their innocence but were unable to challenge the cancellations; others were eventually able to prove their innocence. Either way, the situation resulted in chaos for many.

Osborne Clarke comment

There is precedent for the Home Office to use AI as a decision-making aid, particularly as the introduction of electronic travel authorisations leads to an influx of applications – but, historically, it has been secretive about doing so.

While the EU's AI Act has categorised immigration as a high-risk area for AI, there is currently no equivalent legislation in the UK. We would hope that any tool is carefully developed, with appropriate oversight, and that there remains scope for caseworker discretion if and when it does not work exactly as intended.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
