Employers using AI to recruit skilled workers need to stay up-to-date in Germany

Published on 31st Jul 2023

What employment law and discrimination challenges are raised when artificial intelligence is used to recruit in Germany?

The use of artificial intelligence (AI) has become indispensable for employers searching for and recruiting suitable employees amid the international demand for labour. A majority of HR managers are actively using, or at least considering using, AI-based technologies in their daily HR work. While some jurisdictions already have rules in place to regulate the use of AI in hiring decisions (for example, New York City's Local Law 144), the situation remains unregulated in many countries around the world. What is the situation in Germany?

AI in recruiting in Germany

AI technologies have already found their way into every phase of recruiting: from drafting and posting the job description, to searching for and screening applicants, to interviewing and, finally, selecting and hiring the right "fit".

Big corporations have used "Robot Vera", an AI-controlled recruiting robot, for a number of years now. According to Robot Vera's co-founder, it conducted 50,000 job interviews in one day in 2018. 

Other companies are also developing AI for recruitment. The US-based startup Moonhub is building what it describes as the world's first end-to-end AI recruiter, a platform designed to form part of a company's human resources infrastructure. It aims to scale the teams of fast-growing companies and to enable them to find, hire and retain talent with the right skills. Moonhub also works with large language models such as ChatGPT.

AI-driven recruiting no longer takes place only in the tech bubble of Silicon Valley. The Berlin startup Empion promises to "defeat the shortage of skilled workers in the modern recruiting process". Empion uses a "robo-advisor" that automatically generates questions to create an individual profile of each applicant. With the help of an algorithm, it aims to bring applicants and companies together as suitable "fits" faster than classic job portals do.

New AI systems are now being launched daily. The prediction made in April 2023 by the managing director of human resources at T-Systems International may therefore well prove true: "Large-language models, like ChatGPT and other AI chatbots, will replace many HR services."

'Discriminatory' intelligence risk

There is a fine line between artificial and "discriminatory" intelligence. From an employment law perspective, the use of AI – not only in recruiting but across a company's entire day-to-day operations – harbours a number of risks that need to be considered and, above all, identified and prevented. In this respect, the German General Equal Treatment Act (AGG) sets the boundaries for machine decision-making.

Specifically, the use of AI in recruiting carries the risk of algorithmic bias and discrimination. AI is driven by vast amounts of data, and machine-learning systems train themselves on that data. The output of the AI – in recruiting, the selection of the appropriate "fit" – therefore depends on the quality of the data the AI is fed. Algorithmic biases can result from training AI systems on unrepresentative data (for example, a data set that is not sufficiently diverse and representative). "Bad" or "wrong" data can quickly lead to discrimination in the application process and to potential claims for damages and compensation against employers.
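
To make this concrete, the following is a minimal sketch of the kind of data-quality check such a claim implies: comparing how groups are represented in a model's training data against the applicant pool it is meant to reflect. The column name, toy data and tolerance threshold are hypothetical, not drawn from any particular tool.

```python
# A minimal sketch of a data-quality check: compare how protected groups are
# represented in an AI training set against the applicant pool it should
# reflect. Column names, data and the tolerance threshold are hypothetical.
import pandas as pd

def representation_gaps(training: pd.DataFrame,
                        applicant_pool: pd.DataFrame,
                        group_col: str,
                        tolerance: float = 0.05) -> pd.DataFrame:
    """Flag groups whose share in the training data deviates from the pool."""
    train_share = training[group_col].value_counts(normalize=True)
    pool_share = applicant_pool[group_col].value_counts(normalize=True)
    report = pd.DataFrame({"training_share": train_share,
                           "pool_share": pool_share}).fillna(0.0)
    report["gap"] = report["training_share"] - report["pool_share"]
    report["flagged"] = report["gap"].abs() > tolerance
    return report

# Toy example: women make up 45% of the applicant pool but only 20% of the
# historical hires the model would learn from.
pool = pd.DataFrame({"gender": ["f"] * 45 + ["m"] * 55})
training = pd.DataFrame({"gender": ["f"] * 20 + ["m"] * 80})
print(representation_gaps(training, pool, "gender"))
```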

Section 3 AGG distinguishes between direct discrimination (section 3(1)) and indirect discrimination (section 3(2)). Direct discrimination occurs when a person is treated less favourably than another person in a comparable situation – through active conduct or omission – on one of the grounds listed in section 1 AGG.

Indirect discrimination risks 

Indirect discrimination under section 3(2) AGG poses a particular risk for employers: an AI-based recruiting tool may at first glance rely on a neutral data set which, on closer inspection, disproportionately often maps onto members of a specific group of people. Applicants can thereby be indirectly classified and discriminated against on the grounds listed in section 1 AGG, be it on the basis of their race or ethnic origin, gender, religion or belief, disability, age or sexual identity.

Such indirect discrimination can be proven by statistical records and visible correlations between a supposedly neutral criterion and the "discriminatory" output. In the case of age discrimination, the German Federal Labour Court takes the view that statistical proof that a certain criterion actually disadvantages a certain age group is not even necessary; it is sufficient that the criterion is typically suited to doing so.
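
As an illustration of such "statistical records", the sketch below computes per-group selection rates and impact ratios from a recruiting tool's output. The data are invented, and the 0.8 benchmark is borrowed from the US "four-fifths rule" (also used in bias audits under NYC Local Law 144); the AGG itself prescribes no fixed numerical threshold.

```python
# A minimal sketch of the kind of statistic that could surface indirect
# discrimination: per-group selection rates and impact ratios. The data
# and the 0.8 threshold are illustrative only.
import pandas as pd

decisions = pd.DataFrame({
    "age_group": ["<40", "<40", "<40", "<40", "40+", "40+", "40+", "40+"],
    "shortlisted": [1, 1, 1, 0, 1, 0, 0, 0],
})

rates = decisions.groupby("age_group")["shortlisted"].mean()
impact_ratio = rates / rates.max()   # each group vs. the most-selected group

print(rates)
print(impact_ratio)
print(impact_ratio[impact_ratio < 0.8])  # groups below the 0.8 benchmark
```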

Nevertheless, in cases of doubt, it will be extremely difficult to identify indirect discrimination in retrospect when AI is used. A discriminatory output alone – one in which persons or groups of persons are over- or under-represented, or comparatively more burdened, on a ground mentioned in section 1 AGG – is not sufficient to conclude that there is indirect discrimination within the meaning of section 3(2) AGG.

Rather, it must be proven which of the criteria that seem neutral at first glance were causally relied on by the AI to produce the differentiating output. In this respect, many AI-based decisions represent a "black box": employers may have little or no insight into the reasons for a particular decision and may be unable to say which characteristic led to a discriminatory effect.
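
One common way to gain at least partial insight into such a "black box" is permutation importance: shuffling one input at a time and measuring how much the model's accuracy drops. The sketch below uses scikit-learn with hypothetical features and synthetic data; a large drop for a feature that correlates with a protected ground (here, an age proxy) would warrant closer scrutiny.

```python
# A sketch of one way to peek inside a "black box": permutation importance
# measures how much a model's accuracy degrades when a single input column
# is shuffled. Features and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(size=n),            # skills test score
    rng.integers(0, 30, size=n),   # years since graduation (an age proxy)
])
# Toy labels that secretly depend on the age proxy
y = (X[:, 0] + 0.1 * (15 - X[:, 1])
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, imp in zip(["test_score", "years_since_grad"],
                     result.importances_mean):
    print(f"{name}: mean accuracy drop {imp:.3f}")
```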

Against this backdrop, calls for "trustworthy" – or at least legally regulated – AI are growing louder and louder.

AI regulation in the EU

In order to prevent discriminatory risks, the EU is moving forward with arguably the most detailed, developed and wide-ranging proposal – its AI Act. This will most likely take a tiered, risk-based approach. AI applications that pose an unacceptable risk of harm to human safety or to fundamental rights will be banned. High-risk (but not prohibited) AI – a category expected to include AI systems used in employment and recruitment – will be subject to detailed regulation, registration, certification and a formal enforcement regime. Lower-risk AI will mainly be subject to a transparency requirement. Other AI applications will be unregulated.

In addition, the draft AI Liability Directive (COM/2022/496) aims to extend the national liability frameworks of the EU member states with rules on the fault-based liability of providers and users of AI systems and to harmonise the liability rules for AI in the EU for the first time.

Osborne Clarke comment

Employers operating in Germany – and in Europe in general – are well advised to understand the AI systems they use and to stay up to date with how those systems develop.

When using AI in everyday business, and especially in recruiting, this will mean carrying out quality control of the data sets – whether purchased or internally generated – or outsourcing this control.

It will also require following the development of the AI Act and other regulatory changes, as the question of who will ultimately be liable in the event of non-compliance remains open.

There are exciting times ahead of us all – stay tuned!

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
