Data protection law challenges in Europe when using AI services
Published on 6th Feb 2024
Companies must consider the legal challenges, including data protection, posed by AI services such as Microsoft Copilot
Artificial intelligence (AI) services such as Microsoft 365 Copilot, a generative AI-powered technology, aim to help users with a variety of tasks such as drafting documents or summarising meeting notes. Since Copilot has been made available to the general public, companies have begun to evaluate use cases for such AI services.
As part of the decision to enable Copilot for employees (or only for certain employee groups), companies also need to consider the legal challenges – including data protection law-related risks – associated with using such AI services. Only by addressing these challenges can companies maximise the cutting-edge benefits provided by these AI services.
Analysis of services and understanding to-dos
While companies can already anticipate the impact of upcoming regulations such as the AI Act, current laws such as the General Data Protection Regulation (GDPR) and intellectual property law already require a risk analysis. As with all legal challenges, the specific risks from a data protection law perspective vary depending upon the specific service and use case at hand. Therefore, companies should first assess the following issues:
- Which AI services does the company want to use? Does the company want to enable the use of generally available, free AI services, purchase specific AI services to enhance software already used within the company or develop its own company-internal AI service?
- Which commercial terms cover the use of the AI service?
- Is the AI service covered by data protection law terms (such as the Microsoft Products and Services Data Protection Addendum)? How are data protection law-related issues covered in this agreement?
- How transparent is the available information about the services (for example, what implications does the use of Bing have when this service is enabled within Copilot; has Microsoft implemented data protection law principles concerning the training and use of Copilot; does Microsoft provide sufficient misuse controls to prevent data protection law violations; and how can companies avoid surveilling their employees through Copilot reporting systems)?
- How does the company assess the changing nature of the selected AI services, given that Microsoft regularly announces future Copilot updates?
Transparency and accountability are fundamental principles of data protection law. However, the functionality of AI tools such as Copilot is often complex and not fully transparent to companies considering enabling such AI services. This can make it challenging to understand how personal data is processed and to ensure compliance with data protection obligations. To address these remaining questions, it is often helpful to discuss these aspects directly with the service provider.
Data protection law requires that organisations using AI services such as Copilot document how they have addressed data protection law-related challenges. The great benefit of carrying out this analysis within a data protection impact assessment is that it fully documents the company's understanding of the AI service and supports the choice of sufficient risk-mitigating measures, which ultimately benefits the employees and others who might be affected by the use of AI services. Depending upon the AI services chosen, the actions required could include:
- Ensuring through AI guidelines and employee training that employees use Copilot output only as a basis for their work. From a data protection law perspective, companies should not use such AI services as the sole basis of decisions which could affect their employees or customers (this could otherwise constitute unlawful automated decision-making under Article 22 GDPR). Ideally, these AI guidelines also cover further data protection principles (such as data minimisation) and instruct employees to avoid entering sensitive data into the AI services.
- Reviewing the rights and roles concept within the Microsoft 365 environment, since Copilot accesses all data which employees can access via the Microsoft 365 environment. Our experience shows that this review is key: analysing the rights and roles concept during the Copilot implementation process often uncovers gaps, because the concept should already have been established within the Microsoft 365 applications.
- Using the assessment of the services to explain to employees and others (for example, customers) how AI services process personal data within privacy policies.
- Providing employee training on the use of AI systems (for example, to understand the implications of entering incorrect data into Copilot and the need to review the output provided by Copilot since it is not necessarily factually correct).
- Monitoring the legal situation. Given the relevance for many companies, data protection authorities are currently publishing guidelines on the use of AI services which companies can include in their internal assessments. Additionally, the AI Act is on the horizon, and companies can already factor the implications of the future law into their choice of AI services and use cases.
- Given the vast capabilities of AI services, employee surveillance scenarios are easily conceivable. Therefore, companies with a works council are obliged to involve the works council before introducing AI services.
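As a purely illustrative sketch of the rights-and-roles review mentioned above (all role names, group names and users are hypothetical and this does not use any Microsoft API – a real audit would pull memberships from the identity provider), a simple check might flag group memberships that exceed what an employee's role requires before Copilot is enabled:

```python
# Hypothetical pre-Copilot permissions audit sketch.
# Because Copilot surfaces everything a user can already access,
# any surplus group membership widens what the assistant can retrieve.

# Example mapping of roles to the groups that role actually needs
# (illustrative values only).
EXPECTED_GROUPS = {
    "hr": {"HR-Shared", "All-Staff"},
    "sales": {"Sales-Shared", "All-Staff"},
}


def over_provisioned(role: str, actual_groups: set[str]) -> list[str]:
    """Return the groups a user holds beyond what the role requires."""
    allowed = EXPECTED_GROUPS.get(role, set())
    return sorted(set(actual_groups) - allowed)


if __name__ == "__main__":
    # Example: a sales employee who still holds an old HR group.
    print(over_provisioned("sales", {"Sales-Shared", "All-Staff", "HR-Shared"}))
```

In practice, the list of expected groups per role would come from the documented rights and roles concept, and any surplus memberships flagged this way would be remediated before Copilot is rolled out.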