Agentic AI's impact on the international recruitment process: four tips to prevent fines

Published on 3rd September 2025

Audits, interaction assessments, evaluations and 'guardrails' are steps that can minimise the risk of serious penalties

The development of artificial intelligence (AI) is increasingly focused on autonomous systems, known as "agentic AI", that initiate tasks, adapt strategies and collaborate with other agents or platforms.

Three quarters of US staffing and recruitment companies are reported to be using agentic AI in talent acquisition – and a quarter of those companies describe the technology as a "game changer", particularly for candidate matching. Recruitment company owners are increasingly aware of the clear advantage of systems that can engage effectively with candidates after hours and at weekends.

Additional risk

Agentic AI systems offer enormous opportunities. But they also introduce additional layers of uncertainty and technical risk – risks that go well beyond those associated with generative AI operating within more contained boundaries.

It is tough to be told by a lawyer that there is yet another business-critical issue to address in the UK and EU, particularly given how challenging these markets are for many at the moment and the various reforms in the legislative pipeline in both jurisdictions. But the growth in the use of agentic AI must be addressed in AI governance programmes as soon as possible: breach of the transparency requirements that apply in a recruitment context under EU AI laws and UK and EU data protection laws could lead to EU fines of up to 7% of global annual turnover and UK fines of up to 4%. For a recruiter with, say, €200m in annual turnover, that is a potential exposure of €14m in the EU alone.

Class actions by disgruntled candidates may also become common. Investors know that the use of AI is increasingly commonplace in recruitment and, given the level of fines, will increasingly expect the recruitment companies they invest in to be on top of compliance in this area. Hirers, too, are starting to ask questions, perhaps fearful that non-compliance by their recruiters will infect them as well.

Stay ahead

So, to stay ahead, it is essential for directors, legal teams and compliance teams to have a clear vision and defined "guardrails". The EU AI Act also requires AI literacy among the wider workforce.

Effective AI governance requires attention both to high-stakes applications – such as ensuring that the use of AI in talent acquisition and matching complies with the EU AI Act, the General Data Protection Regulation and other regulations – and to more everyday situations, such as the use of AI in finance and payroll for anomaly and fraud detection. Governance often fails when employees do not understand the tools they are using or the risks those tools introduce.
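
To make that everyday end of the spectrum concrete, the sketch below shows the kind of simple statistical check an anomaly-detection tool in payroll might run. It is a hypothetical illustration only: the figures, the threshold and the z-score approach are our assumptions for the example, not a description of any particular product.

```python
# Hypothetical illustration only: flag payroll payments that deviate sharply
# from the historical pattern. Figures and the z-score threshold are invented.
from statistics import mean, stdev

def flag_anomalies(payments: list[float], z_threshold: float = 2.0) -> list[float]:
    """Return payments more than z_threshold standard deviations from the mean."""
    mu, sigma = mean(payments), stdev(payments)
    if sigma == 0:
        return []  # identical payments every period: nothing to flag
    return [p for p in payments if abs(p - mu) / sigma > z_threshold]

history = [3100.0, 3100.0, 3150.0, 3100.0, 3120.0, 9800.0]  # one suspicious payment
print(flag_anomalies(history))  # prints [9800.0]
```

Even this toy example raises the governance questions in the paragraph above: who understands how the threshold was set, and what happens when the tool gets it wrong?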

Four steps

Therefore, there are a number of starting steps that can be taken to help minimise the risk of serious fines (illustrated in the sketch after the list):

  • Conduct a full audit: begin with a comprehensive audit of your AI tools, including usage behaviours to understand who is using them and for what purposes.
  • Assess interactions: identify how these AI tools interact with your internal systems and external data sources.
  • Evaluate risks: determine which use cases introduce the greatest data exposure or operational risk.
  • Frame guardrails: use these insights to establish governance guardrails and escalation pathways.
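
As a purely illustrative sketch of what those four steps might produce in practice – not legal advice, and with every tool name, risk factor and threshold being a hypothetical assumption – an AI tool register could start as simply as this:

```python
# Illustrative sketch only: a minimal AI tool register covering the four steps.
# Tool names, risk factors and thresholds are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str                    # step 1 (audit): what is in use
    users: list[str]             # step 1 (audit): who uses it, and for what
    purpose: str
    integrations: list[str]      # step 2 (interactions): systems and data sources touched
    handles_personal_data: bool  # step 3 (risks): inputs to the risk evaluation
    acts_autonomously: bool

def risk_score(tool: AITool) -> int:
    """Step 3 (risks): a crude additive score; real assessments weigh many more factors."""
    score = len(tool.integrations)  # more integrations, more exposure
    score += 3 if tool.handles_personal_data else 0
    score += 3 if tool.acts_autonomously else 0
    return score

def escalation_path(tool: AITool) -> str:
    """Step 4 (guardrails): map the score onto a governance pathway."""
    return "escalate to compliance review" if risk_score(tool) >= 5 else "routine monitoring"

# A hypothetical agentic sourcing tool and where it lands:
matcher = AITool(
    name="CandidateMatcher",
    users=["recruitment team"],
    purpose="talent acquisition and matching",
    integrations=["ATS", "email", "public job boards"],
    handles_personal_data=True,
    acts_autonomously=True,
)
print(matcher.name, "->", escalation_path(matcher))  # -> escalate to compliance review
```

A real governance programme would of course record far more – legal bases, DPIA status, human-oversight arrangements – but even a register this simple makes gaps, and the need for escalation pathways, visible.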

For more insight, our report Agentic AI: Why Governance Can’t Wait, which we prepared with the help of some of our clients across a variety of sectors, sets out some further learning points around the safe use of agentic AI. We have also developed a tool that can be used to assess risk and compliance actions in these areas – please let us know if you would like a demonstration.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
