AI Literacy: a key element in the implementation of the EU regulation on artificial intelligence
Published on 26th June 2025
The training of staff who use or develop AI systems has been mandatory for organisations since February 2025

Regulation (EU) 2024/1689 on Artificial Intelligence (the AI Act), in force since 1 August 2024, marks a regulatory milestone at the international level. Its objective is to promote the development and adoption of trustworthy, safe and human-centric artificial intelligence (AI) through a risk-based regulatory approach.
Among the provisions that took effect on 2 February 2025, article 4 stands out. It addresses a vital, and often underestimated, element of the responsible implementation of AI systems: the literacy and training of the people who interact with these technologies.
This legal provision requires both developers and users of AI systems to ensure that their staff, as well as any individual acting on their behalf, possess a sufficient level of AI literacy. Although this obligation may seem straightforward at first glance, it has raised numerous practical questions. What does “AI literacy” actually mean? Who is the requirement aimed at? What level of knowledge is considered sufficient?
To provide clarity, the European Commission recently published a questions and answers (Q&A) document on its website. The Q&A aims to dispel the ambiguity of a legal provision which, while already applicable, leaves significant gaps.
Concept of AI literacy
According to the definition set out in the regulation, AI literacy encompasses the skills, knowledge and understanding that allow providers, deployers and other affected persons to deploy AI systems in an informed manner. This literacy entails developing an awareness not only of the opportunities that AI offers but also of its inherent risks and the harm it may cause.
In other words, it is not about generic training applicable across an entire organisation, nor a standardised requirement for all sectors. AI literacy must be tailored to the specific context in which the technology is used. The European AI Office has made it clear that no uniform training requirements or mandatory certifications will be imposed in this regard. Instead, it promotes a flexible, context-based approach that considers employees’ technical knowledge, experience, education and training, as well as the risk level of the AI system.
Obligated parties
It is important to note that article 4 of the AI Act requires AI literacy not only for the employees of developers and user organisations, but also for individuals acting on their behalf. This includes service providers, contractors or even clients, insofar as they are involved in the use or operation of AI systems. The key criterion is not the contractual relationship between the organisation and the person acting on its behalf, but the degree of organisational control.
However, the wording of this provision has sparked some interpretive debate over the explicit reference to "staff". Was this mention necessary, given that the regulation already refers to "persons acting on their behalf", a category that would, in principle, include employees?
One possible interpretation is that the legislator intended to emphasise that the obligation is not limited to those directly interacting with AI systems but may extend to the entire workforce, given that contact with such tools is, or soon will be, widespread in many work environments. A more restrictive reading could also be argued: since the regulation requires only a "sufficient level" of knowledge, it could be contended that, where there is no interaction with AI systems, minimal or even no knowledge would meet the requirement. That approach, however, is becoming increasingly difficult to defend as the use of AI becomes a cross-cutting reality in the workplace.
Minimum required content
Although the AI Regulation does not impose a fixed syllabus or a mandatory format, the European Commission has identified a set of essential elements that every AI literacy programme should cover in order to comply with article 4:
- General understanding of AI. Individuals undergoing training should acquire basic notions of what artificial intelligence is, how it works and how it is used within the organisation.
- Clarity on the organisation's role. It is essential for staff to understand whether their organisation acts as a provider or as a deployer of AI systems, as this determines legal responsibilities.
- Awareness of the level of risk associated with the AI systems used. The training approach should be tailored to the type and level of risk of the system deployed (for example, a corporate chatbot does not require the same preparation as an AI system that makes automated decisions in the healthcare or financial sectors).
- Adaptation to the recipient's profile and sectoral context. Training should be proportionate and adapted to the technical knowledge, previous experience, education and responsibilities of each group (for example, a developer’s needs differ from those of a customer service operator), while also accounting for the sector and the purpose of the AI system in question.
Documentation of the effort
The Q&A document clarifies that article 4 of the AI Regulation does not establish a certification system or standardised training; that is, it does not require personnel to pass specific tests or obtain formal accreditations. However, this does not exempt organisations from taking action. The Commission's guidance indicates that operators should internally document the initiatives they implement to ensure a sufficient level of AI literacy. The key is to be able to demonstrate that a reasonable and proportionate effort has been made, taking into account the context, the AI systems used and the profiles involved.
European AI Office
In this context, the European AI Office, newly established within the Commission, will play a central role in monitoring and developing this obligation. It is expected to issue further guidance, host webinars and launch a dedicated webpage on AI literacy, featuring training resources and practical case studies.
One of the first tools it has already rolled out is a living repository of AI literacy practices. This repository showcases real-life examples of how organisations across different sectors are approaching training and awareness in AI. While replicating these initiatives does not, by itself, ensure compliance with article 4, the goal is clear: to foster mutual learning and exchange among AI system providers and deployers.
Osborne Clarke comment
AI literacy should not be seen as a mere formality. Organisations should address this requirement by developing training programmes that match the risk level, autonomy and complexity of the systems in use. Article 4 allows for a degree of interpretation, but it does not exempt organisations from responsibility. In the absence of specific technical standards, the best path to compliance is to adopt a proactive, documented and measurable approach to this training obligation.
With an eye on 2 August 2026, the date when national authorities will begin actively supervising this obligation, we recommend not waiting for harmonised criteria to emerge. Instead, start now with an internal assessment to identify affected individuals, training needs and monitoring measures.