How should data centre operators manage artificial intelligence compliance risk in the EEA?
Published on 23rd September 2025
The EU’s AI Act brings fresh challenges for data centre operators – and the regulatory regime will continue to develop

The phrase "artificial intelligence" (AI) permeates conversations about technology, business and everyday life. Its revolutionary impact on data processing is undeniable. What is often not appreciated is that the EU's Artificial Intelligence Act imposes obligations on the operators of the data centres that host the processing.
The AI Act and data centres
Against the backdrop of the wave of IT regulation in the European Economic Area (EEA), one might expect that the relationship between AI development and data centres would lead to specific regulations for the data centre market. Surprisingly, EU and other European lawmakers have not focused heavily on data centres in their AI plans.
The EU's AI Act, the first provisions of which came into effect on 2 February 2025, might initially seem to overlook data centres. However, a closer examination reveals that the AI Act does impose relevant obligations—both for data centres using AI solutions internally and for providers and users of AI systems hosted in external data centres.
Although the Act does not apply directly to AI data centres located in the UK and other countries outside the EEA, it is important for operators of those centres to be aware of its provisions. If they host AI systems which are used within the EEA, their EEA customers will expect the centres to be operated in a way which facilitates compliance with the Act, and their contracts to reflect this.
Data centre operators from other territories should also keep themselves informed of upcoming AI legislation in their countries, in case there is an impact on them. For example, the UK government proposes to bring in AI-specific legislation next year, which is likely to focus on AI safety and security, and may well have implications for data centres in the UK and elsewhere.
High-risk AI systems in data centres
The AI Act categorises AI usage in the EEA into three areas: prohibited, high risk, and everything else. Prohibited practices include subliminal techniques, social scoring, crime risk prediction, and emotion inference. These prohibitions have little direct impact on data centres.
High-risk AI systems, however, may be significant for data centres, and are subject to strict and costly obligations. One relevant category comprises AI systems used as safety components in the management and operation of critical digital infrastructure (Article 6(2) of, and point 2 of Annex III to, the AI Act).
The EU legislators' vague and general phrasing could encompass all AI systems related to data centre security, especially for the largest centres. The Act does not define "critical digital infrastructure", though Article 3(62) defines "critical infrastructure" by reference to the Critical Entities Resilience Directive (EU) 2022/2557.
Recital 55 of the Act provides some clarity. In summary:
- AI systems classified as high-risk for managing and operating critical infrastructure are those used as safety-critical components, whose failure or malfunctioning could put at risk the physical integrity of the infrastructure or the health and safety of persons and property. Components used solely for cybersecurity purposes do not qualify as safety components. The recital's examples include systems for monitoring water pressure and fire alarm control systems in cloud computing centres.
From this, two conclusions arise:
- Data centres are considered critical digital infrastructure and are affected by high-risk AI system regulations.
- High-risk AI systems in data centres are those related to physical security; the category does not cover cybersecurity tools or the AI used in cloud services and data processing.
Expanding on Recital 55, data centres must classify AI tools that monitor or control water pressure and fire alarms as high-risk AI systems; other likely examples are systems that monitor for electrical faults and those controlling temperature and physical access. The category will not catch AI systems for network control, virtual machine provisioning, IT environment optimisation, power consumption monitoring, or cyberattack detection.
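To illustrate how this reading might be operationalised, here is a minimal sketch of a first-pass screening rule for an internal AI inventory. The tool and its function lists are our own invention: they simply encode the examples above, and the labels are not defined terms in the Act, so each system would still need individual legal assessment.

```python
# Illustrative sketch only: a first-pass risk screen reflecting the reading
# of Recital 55 set out above. Category labels are our own assumptions,
# not defined terms from the AI Act.

# Functions treated above as safety components (likely high-risk)
SAFETY_FUNCTIONS = {
    "water_pressure_monitoring",
    "fire_alarm_control",
    "electrical_fault_monitoring",
    "temperature_control",
    "physical_access_control",
}

# Functions treated above as falling outside the high-risk category
NON_SAFETY_FUNCTIONS = {
    "network_control",
    "vm_provisioning",
    "it_environment_optimisation",
    "power_consumption_monitoring",
    "cyberattack_detection",
}

def screen_risk(function: str) -> str:
    """Return a provisional AI Act risk label for a data centre AI tool."""
    if function in SAFETY_FUNCTIONS:
        return "high-risk (Article 6(2) and Annex III, point 2)"
    if function in NON_SAFETY_FUNCTIONS:
        return "minimal risk (on the reading above)"
    return "unclassified: needs legal assessment"

print(screen_risk("fire_alarm_control"))
print(screen_risk("cyberattack_detection"))
```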
Responsibilities for operators
While some monitoring and control systems will continue to be based on traditional software, it is likely that an increasing number of data centres will use systems which incorporate an AI element. High-risk AI systems bring extensive responsibilities for data centre operators. Providers of these systems (often the data centres themselves) must:
- Establish, document, and operate a risk management system.
- Meet quality criteria for data sets used in the AI lifecycle (data governance).
- Create and update technical documentation.
- Automatically record events over the AI system's lifecycle (see the logging sketch below).
- Ensure transparency and adequate information for users (including operating instructions).
- Ensure adequate human oversight.
- Ensure accuracy, robustness, and cybersecurity.
They must also operate a quality management system and comply with registration and compliance declaration obligations.
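By way of illustration only, the snippet below sketches one way a provider might implement the automatic event-recording obligation as an append-only, structured log. The schema and field names are our own assumptions; the Act does not prescribe a format.

```python
import json
import time
from pathlib import Path

# Illustrative only: append-only, structured log of AI lifecycle events.
# The JSON Lines schema here is our own assumption, not an AI Act requirement.
LOG_FILE = Path("ai_lifecycle_events.jsonl")

def record_event(system_id: str, event_type: str, detail: str) -> None:
    """Append one timestamped lifecycle event as a JSON line."""
    event = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "deployment", "retraining", "fault"
        "detail": detail,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

record_event("fire-alarm-ai-01", "fault",
             "sensor input out of range; fallback engaged")
```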
Data centre operators that deploy these high-risk AI systems ("deployers" in the Act's terminology), including data centres procuring the systems from third-party vendors, must:
- Take technical and organisational measures to ensure the AI systems are used according to operating instructions.
- Ensure adequate human oversight.
- Ensure quality regarding data governance (insofar as it depends on the operator).
- Monitor the AI system's operation.
- Store automatically generated event logs (a retention sketch follows this list).
- Adequately inform data centre employees about the high-risk AI use.
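On log storage, the Act requires deployers to keep automatically generated logs for a period appropriate to the system's purpose, with a minimum of six months unless other applicable law provides otherwise. A minimal sketch of a retention check follows; the directory layout and the six-month figure expressed in seconds are our own simplifications, and the rule should be confirmed against the Act and any sector-specific law before anything is deleted.

```python
import time
from pathlib import Path

# Illustrative only: flag event-log files that are still inside the minimum
# retention window. Six months is our reading of the deployer minimum;
# confirm against the AI Act and other applicable law before deleting.
MIN_RETENTION_SECONDS = 183 * 24 * 3600  # roughly six months

def safe_to_delete(log_path: Path) -> bool:
    """True only once the log file is older than the minimum retention period."""
    age = time.time() - log_path.stat().st_mtime
    return age > MIN_RETENTION_SECONDS

# Hypothetical layout: one JSON Lines file per AI system in a logs/ directory.
for path in Path("logs").glob("*.jsonl"):
    status = "may be deleted" if safe_to_delete(path) else "must be retained"
    print(f"{path.name}: {status}")
```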
Low-risk AI systems
The lists of prohibited and high-risk AI systems are quite narrow, and most AI systems in data centres fall into the lowest risk category. These include AI systems that operate and control infrastructure, monitor performance or resource consumption, and support security.
The AI Act encourages the voluntary application to lower-risk systems of standards similar to those for high-risk systems, especially where the AI system significantly affects organisational stability or essential processes. A common-sense approach based on risk assessment is recommended, with periodic monitoring of the system's operation and of potential changes in its risk classification under the AI Act.
The role of data centres in ensuring compliance
Data centres play a crucial role in compliance with the AI Act, not just for their own high-risk AI systems but also for those of their customers. While none of the obligations on AI system providers and users is specific to data centres, many of them indirectly affect the relationship between operators and their customers.
For example, failures in high-risk AI systems may require immediate response capabilities, such as stopping the AI system, implementing workarounds, or launching backup copies. Business continuity, redundancy and backup management are vital in this context.
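To make this concrete, here is a minimal, purely illustrative sketch of a "stop and fall back" control of the kind described above. The class, names and thresholds are our own assumptions rather than anything prescribed by the Act; a production control would be far more elaborate.

```python
# Illustrative only: wrap an AI component with a kill switch and a
# deterministic fallback so the system can be stopped and worked around.
class SupervisedAISystem:
    def __init__(self, model, fallback, max_faults: int = 2):
        self.model = model        # AI component, e.g. an anomaly scorer
        self.fallback = fallback  # deterministic workaround logic
        self.max_faults = max_faults
        self.faults = 0
        self.stopped = False      # kill-switch state

    def predict(self, reading: float) -> str:
        """Use the AI model unless the kill switch has been tripped."""
        if not self.stopped:
            try:
                return self.model(reading)
            except Exception:
                self.faults += 1
                if self.faults >= self.max_faults:
                    self.stopped = True  # stop the AI system entirely
        return self.fallback(reading)    # workaround / backup path

def flaky_model(reading: float) -> str:
    raise RuntimeError("model unavailable")  # demo stand-in: always fails

def threshold_rule(reading: float) -> str:
    return "alarm" if reading > 8.0 else "ok"

system = SupervisedAISystem(flaky_model, threshold_rule)
print([system.predict(x) for x in (1.0, 9.5, 2.0)])
# -> ['ok', 'alarm', 'ok']: after two faults the kill switch trips and all
#    subsequent calls are served by the deterministic fallback
```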
Data centre properties like physical security, cybersecurity and availability of computing power can influence AI system resilience against external threats, such as cyber attacks or data poisoning.
Growing importance of data centres for high-risk AI systems
Operators and users of high-risk AI systems should assess whether their data centre relationships need to be covered by the required risk analysis. Data centre operations are important for recording incidents which affect AI systems and for ensuring transparency and accountability, and access to data centre tools can enhance the effectiveness of human oversight of AI systems.
Selecting a secure data centre and structuring the relationship appropriately is vital for ensuring AI system accuracy, robustness and cybersecurity, which are key requirements for high-risk AI systems. The AI Act mandates redundancy and backup plans: data centres are integral to these, and to measures that protect data sets against attack and preserve data confidentiality.
Properly structuring services within data centre resources helps ensure that high-risk AI systems operate robustly. Malfunctions that cause harm to individuals may breach the Act, so appropriate agreements should be in place to prevent such issues and allocate responsibility for them.
Finally, the AI Act has surprisingly little to say about energy efficiency and sustainability, beyond obligations on AI regulators to facilitate the creation of voluntary codes of conduct covering the environmental impact of AI systems and energy-efficient techniques for the design, training and use of AI. These are, however, issues of great interest to many operators and users of data centres, for both ethical and practical reasons.
Osborne Clarke comment
Data centre operators should implement compliance measures for any of their AI systems that deal with physical security issues, and prepare to address customer needs arising from the AI Act, including reviewing their customer and supplier contracts to reflect these requirements.
Investments in data centres should include measures to ensure ongoing compliance with the AI Act. Although current obligations are not extensive, changes are expected: the European Commission has the power to expand the catalogue of high-risk AI systems, and the growing role of data centres will be a significant factor in that analysis.