GDPR for HR

UK and EU GDPR for HR | Autumn 2025

Published on 17th November 2025

Data subject access requests and disclosure pitfalls, workplace AI engagement, and sharing pseudonymised data


Welcome to the autumn edition of our GDPR for HR newsletter, featuring the latest updates, cases and insights on data privacy. 

In this edition, we offer top tips to reduce the risk of accidental disclosure of personal data and considerations when transferring pseudonymised data. We also highlight survey findings on barriers to successful adoption of artificial intelligence (AI) in the workplace that are a reminder of the importance of not overlooking employee engagement.

How to reduce the risk of accidental breaches when disclosing DSAR data

Information that is hidden or embedded in documents presents a risk to organisations when faced with a request to provide information to individuals or to the public generally.

In the context of a data subject access request (DSAR), examples of hidden personal information typically include names of authors of documents, comments and edits, as well as information formatted as "invisible" or the same colour as the background and hidden columns in spreadsheets. 

Unintentional disclosure of commercial information hidden within documents could also be damaging to an organisation for other reasons. Without due care, this type of hidden data can easily slip through the review and redaction process, resulting in the unwitting disclosure of third-party personal information and leaving the organisation to deal with a data breach as well as the DSAR. The Information Commissioner's Office has published practical guidance for organisations on how to minimise this risk, including:

  • Invest in and use software tools (such as Document Inspector) to locate and remove hidden personal information or convert complex documents into a simple format (such as txt or csv files) to reveal all displayable information.
  • Look out for signs that a document may contain hidden data, such as a file size that is larger than expected.
  • Consider copying and pasting an image of the document into a new document (using the "paste special" feature) to leave behind unwanted embedded information.
  • Avoid ineffective redaction techniques such as use of black markers or simple image-editing tools (such as covering information with black rectangles).
  • Review redactions before disclosure to check they are effective.
  • Keep records of what information has been redacted, by whom and the reason why.
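To illustrate the first two tips: a Word document (.docx) is in fact a ZIP archive of XML parts, so hidden material such as comments, author metadata and tracked changes can be detected before disclosure by inspecting the package. The sketch below is a minimal, illustrative check using only the Python standard library; the list of risky parts is an assumption based on common .docx structure, not an exhaustive audit, and dedicated inspection tools should still be used.

```python
import zipfile

# Parts of a .docx package that commonly carry hidden or embedded data.
# (Illustrative, not exhaustive.)
RISKY_PARTS = {
    "word/comments.xml": "reviewer comments",
    "word/people.xml": "names of commenters and editors",
    "docProps/core.xml": "author and last-modified-by metadata",
    "docProps/custom.xml": "custom document properties",
}

def flag_hidden_data(docx_file):
    """Return warnings about .docx parts that may hold hidden data."""
    warnings = []
    with zipfile.ZipFile(docx_file) as z:
        names = set(z.namelist())
        for part, description in RISKY_PARTS.items():
            if part in names:
                warnings.append(f"{part}: may contain {description}")
        # Tracked changes appear as <w:ins>/<w:del> elements in the body.
        body = z.read("word/document.xml")
        if b"<w:ins " in body or b"<w:del " in body:
            warnings.append("word/document.xml: contains tracked changes")
    return warnings
```

A document flagged by a check like this should be reviewed and cleansed (for example with Document Inspector) before it is released in response to a DSAR.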

Of course, underpinning all of the above is the need to ensure that staff involved in the disclosure of information to the public, including DSAR data, have appropriate data protection training, so they are confident as to the what, when and how of redaction and how to identify and report data breaches so that they can be proactively managed and mitigated. 

The biggest barrier to AI adoption in the workplace may be human rather than technical 

When looking at factors that may determine whether adoption of AI is successful in the workplace, employee engagement may be just as important as the technical capabilities of the AI itself. That was the conclusion of a survey published in March this year by Writer and Workplace Intelligence, which examined generative AI adoption in the enterprise and the attitudes of 1,600 senior executives and employees from a variety of sectors in the US. Nearly a third of respondents admitted to behaviour that the report described as sabotage of workplace AI, including refusing to use AI or under-utilising its features, tampering with performance metrics and using unapproved tools. 

Nearly half of respondents believed that use of AI in their workplace was damaging cohesion, and the survey also identified significant concerns held by some respondents. These included fears that their roles might be replaced by AI, doubts about the quality or security of the particular AI implemented in their workplace, and the perception that AI made them and their work feel less valuable and creative. 

What do these findings mean for employers looking to introduce or increase their use of AI in the workplace? Employee engagement and reassurance are essential: in practice, this means finding ways to involve employees in the change process rather than simply looking to impose AI upon them. 

For example, employees can be shown how AI will support rather than replace them in their role. Adequate training also ensures that employees know how to use the relevant AI tools, who to go to with queries, where to seek feedback and how to raise any concerns. This all helps to foster a sense of trust and of working together, so that employees and employers can reap the benefits that AI has to offer in the workplace. 

What obligations do employers have when processing pseudonymised personal data? 

Pseudonymisation is a technique that involves swapping personally identifying information (such as names) with a key or code. Only those with access to the key can reverse the pseudonymisation process. This approach is commonly used by HR; for example, in employee surveys, names are often replaced with a number or code. 
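The technique described above can be sketched in a few lines. The example below is a minimal illustration only (the record structure, field names and `EMP-` code format are assumptions for the purposes of the example): identifying values are replaced with random codes, and the mapping from code back to name (the key) is held separately, since only a holder of that key can re-identify the data.

```python
import secrets

def pseudonymise(records, field="name"):
    """Replace the identifying field in each record with a random code.

    Returns (pseudonymised_records, key), where `key` maps each code
    back to the original value and must be stored separately.
    """
    key = {}
    out = []
    for record in records:
        code = f"EMP-{secrets.token_hex(4)}"  # e.g. "EMP-a1b2c3d4"
        key[code] = record[field]
        out.append({**record, field: code})
    return out, key

def reidentify(record, key, field="name"):
    """Reverse pseudonymisation using the separately stored key."""
    return {**record, field: key[record[field]]}
```

Note that, as the case discussed below underlines, pseudonymised data of this kind may still be personal data in the hands of anyone able to access the key.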

This reduces some of the risks around processing of personal data, but there can be confusion about the extent to which data protection obligations still apply. The recent ruling of the Court of Justice of the EU in Single Resolution Board v the European Data Protection Supervisor clarifies the concepts of personal and pseudonymised data and the implications for data sharing, as summarised by our team of data protection experts. The court shed light on what employers will need to consider when disclosing pseudonymised data, including the implications for privacy policies, consent, DSARs and the legal basis for data transfer.  

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
