GDPR for HR | Data protection guidance on artificial intelligence
Published on 2nd May 2023
Welcome to this month's edition of our GDPR for HR newsletter, bringing you a snapshot of developments, cases and insights relating to privacy in the workplace.
Guidance on AI and data protection
On 15 March 2023, the UK Information Commissioner's Office (ICO) published an updated version of its guidance on artificial intelligence (AI) and data protection. The guidance sets out best practices and measures that organisations should adopt to ensure that their use of AI complies with data protection law. Key updates include:
- Accountability and governance. New content in this chapter outlines the steps that organisations should take when developing and implementing AI systems to comply with data protection laws and, in particular, what organisations should consider when conducting related data protection impact assessments (DPIAs). The updated guidance states that when conducting a DPIA, organisations should include evidence demonstrating that "less risky alternatives" were considered, together with the reasoning for why those alternatives were not pursued.
- Transparency in AI. This new chapter emphasises the importance of providing clear information to individuals about how their personal data is being used to make decisions using AI. The ICO suggests that organisations should include the purpose for processing personal data, the retention period, and who the data will be shared with as part of their privacy information. The ICO also notes that, where data is collected directly from individuals, privacy information must be provided to those individuals before their data is used to train a model or before the model is applied to them. This new chapter should be read in conjunction with the ICO's existing Explaining Decisions Made with AI product.
- Lawfulness in AI. This new chapter addresses how to ensure that the use of AI to process personal data is lawful. It includes advice on how to identify an appropriate lawful basis for processing personal data, such as consent, performance of a contract, legitimate interests, or compliance with a legal obligation. It also discusses the importance of conducting DPIAs, transparency and accountability, and ensuring data accuracy and security in AI systems.
- Fairness in AI. This new chapter aims to protect individuals from direct and indirect discrimination, whether generated by a human or an automated decision-making system. It includes information on:
- Data protection's approach to fairness and how it applies to AI.
- The difference between fairness, algorithmic fairness, bias and discrimination.
- High-level considerations when evaluating fairness and the inherent trade-offs involved.
- Technical approaches to mitigate algorithmic bias.
- Key questions to ask when considering automated decision-making and relevant safeguards.
- Annex A: fairness in the AI lifecycle. This new annex has been added to address various fairness considerations across the AI lifecycle, from problem formulation to decommissioning. It discusses how fairness can be affected by fundamental aspects of AI development. It also explains the different sources of bias that can lead to unfairness, as well as possible mitigation steps. Ultimately, the goal of the guidance is to encourage organisations using AI to prioritise fairness and ensure that their systems do not unfairly discriminate against individuals or groups.
The ICO's updated guidance supports the UK government's vision of a pro-innovation approach to AI regulation and more specifically its intention to embed considerations of fairness into AI. You can read more about the UK's newly published white paper on AI in our Insight.
In the news
'A crucial learning experience': ICO calls for highest standards in HIV services after NHS Highland reprimand
On 30 March 2023, the ICO issued a formal reprimand to NHS Highland for a "serious breach of trust" after a data breach involving people likely to be accessing HIV services. The breach involved an email sent to 37 such people in which CC (carbon copy) was inadvertently used instead of BCC (blind carbon copy). As a result, recipients of the email could see the personal email addresses of everyone else it was sent to; one person confirmed that they recognised four other individuals, one of whom was a former sexual partner.
This case highlights the importance of employers taking steps to reduce the risk of accidental employee data breaches, which often occur due to human error or a lack of understanding of security protocols. To minimise the risk, employers should provide regular training to employees on how to handle sensitive personal data (such as health data), including best practices for password management, data encryption, and secure file sharing. Employers should also implement strict access controls, limiting access to sensitive information to those employees who need it for their job duties, along with a clear protocol for reporting and addressing any suspected breaches.
Ultimately, it is the responsibility of both employers and employees to prioritise data security in the workplace, but, by taking proactive steps to reduce the risk of accidental data breaches, companies can help protect themselves and their clients from potentially devastating consequences.
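For technical teams supporting HR, the CC/BCC error above comes down to where recipient addresses are placed: anything in an email's To or Cc headers is visible to every recipient, whereas envelope-only (BCC) recipients are not. As a minimal illustrative sketch (the helper name and addresses here are hypothetical and not drawn from the ICO case), a bulk mailing can keep recipient addresses out of the visible headers entirely:

```python
from email.message import EmailMessage


def build_bcc_message(sender, recipients, subject, body):
    """Build a message whose visible headers never expose recipient addresses.

    The recipient list is returned separately so it can be supplied as the
    SMTP envelope (the effect of BCC). Placing the addresses in the To or
    Cc headers instead is exactly the error behind CC/BCC breaches: every
    recipient would see everyone else's address.
    """
    msg = EmailMessage()
    msg["From"] = sender
    # Address the visible header to the sender (or a generic mailbox),
    # NOT to the individual recipients.
    msg["To"] = sender
    msg["Subject"] = subject
    msg.set_content(body)
    return msg, list(recipients)


# The envelope recipients would then be passed to smtplib separately, e.g.:
#   smtplib.SMTP(host).send_message(msg, to_addrs=recipients)
```

Because the individual addresses appear only in the envelope, they never reach other recipients, regardless of how many people the message is sent to.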
An update on employment law challenges in 2023
On Tuesday 21 March 2023, we hosted our annual Employment Law Conference, where a team of Osborne Clarke international lawyers discussed the evolving employment landscape in jurisdictions including France, Belgium, Italy, Poland, Germany, the Netherlands and the UK. The team identified common trends for employers operating across jurisdictions, including new laws focused on remote working, the hurdles faced by employers in attracting and retaining talent, and legislative initiatives to support employees at work. If you missed it, please do contact us for a recording and read more about it in our Insight.