European Commission to clarify providers’ obligations for general-purpose AI models under the EU AI Act
Published on 26th May 2025
A set of guidelines and a code of practice will supplement the AI Act for a harmonised, risk-based regime for GPAI models

The European Union’s Artificial Intelligence (AI) Act establishes a harmonised regulatory framework for the use of transparent, safe and trustworthy AI in Europe. As obligations for providers of general-purpose AI (GPAI) models will apply from 2 August 2025, the European Commission has asked stakeholders for input to clarify the scope of these providers’ obligations and to address both regulatory gaps and practical challenges.
GPAI definition and regulatory challenges
General-purpose AI models are defined in the AI Act as displaying significant generality, capable of competently performing a wide range of distinct tasks and intended to be integrated into various downstream AI systems or applications. These models, often trained on vast and diverse datasets using self-supervised learning, underpin many advanced AI services, from language generation to multimodal content creation.
The very flexibility and foundational nature of GPAI models create unique regulatory challenges:
- Diffused responsibility. Providers of GPAI models often do not control how their models are adapted or used downstream. This makes it difficult to allocate clear legal responsibility for compliance and risk management.
- Complex provider identification. When a third party modifies or fine-tunes a GPAI model, it may become a “provider” under the AI Act, but distinguishing between a user and a provider is not always straightforward.
- Rapid technological evolution. GPAI models continuously improve in capability and scale, making it difficult for policymakers to establish fixed regulatory thresholds that effectively capture models of this kind.
- Limited transparency. The large and diverse datasets used to train GPAI models are often proprietary or opaque, making it harder to ensure compliance with copyright law and to assess bias.
AI Act’s approach to GPAI models
The EU AI Act sets clear obligations for providers of GPAI models that place them on the EU market or put them into service, regardless of whether those providers are established inside or outside the EU.
The notion of “provider” for the purposes of the AI Act includes original developers and those who substantially modify or adapt a GPAI model before market placement. Non-EU providers must appoint an EU-based authorised representative.
Core obligations for providers of GPAI models include:
- Maintaining detailed technical documentation on model design, training data, testing, and evaluation, accessible to authorities and downstream users.
- Publishing a clear summary of the training datasets to enhance transparency.
- Implementing policies to ensure compliance with EU copyright law regarding training data, in particular respecting the text and data mining opt-outs reserved by rightholders.

For GPAI models identified as posing systemic risks – meaning that their scale, capabilities or market impact could cause significant harm at Union level – providers are subject to additional requirements, including risk management, incident reporting and specific cybersecurity measures.
Notably, open-source GPAI models are exempt from some documentation and transparency obligations unless they are classified as systemic-risk models, acknowledging the importance of open innovation while maintaining safeguards. Also, AI systems and models in a pure research and development phase are generally exempt until placed on the market.
Lastly, providers must share relevant information with downstream users, who are responsible for complying with the AI Act requirements applicable to their AI systems. The forthcoming voluntary GPAI Code of Practice aims to offer detailed guidance on meeting these obligations.
Aspects of the AI Act addressed by the guidelines
The AI Act lays down rules for GPAI models, but certain aspects remain unclear or open to interpretation. The Commission’s forthcoming guidelines are expected to clarify these aspects to ensure the consistent and practical application of the AI Act.
- Definition and classification of GPAI models. The AI Act’s qualitative definition of GPAI models as those with “significant generality” and capable of performing a “wide range of tasks” lacks precise, measurable criteria. The guidelines are expected to address this by introducing a training compute threshold as a practical proxy for determining when a model qualifies as GPAI. They would also explain how to apply this threshold and when the presumption can be rebutted based on the model’s actual capabilities.
- Clarifying provider status. The AI Act does not clearly specify when a downstream actor modifying or fine-tuning a GPAI model becomes a “provider” subject to obligations. The guidelines would set out detailed criteria to identify who qualifies as a provider, helping to delineate responsibilities along the AI supply chain and reduce legal uncertainty.
- Scope of obligations and exemptions. The AI Act exempts open-source GPAI models from certain documentation and transparency requirements but lacks detailed guidance on the application of this exemption. The guidelines would clarify the conditions under which open-source models are exempt and when systemic-risk obligations still apply, offering practical advice on compliance.
- Estimating training compute. The AI Act references training compute as a factor but does not specify how providers should calculate or estimate it. The guidelines would offer concrete methodologies and examples for estimating training compute, enabling providers to assess their models consistently (see the illustrative sketch after this list).
- Transitional and "grandfathering" rules. The AI Act includes transitional provisions for existing GPAI models. Providers of GPAI models placed on the market before 2 August 2025 will need to take the “necessary steps” to comply with the AI Act’s obligations by 2 August 2027. The guidelines would explain how these transitional rules apply in practice, assisting providers in managing legacy models.
- Supervision and enforcement. While the AI Act designates the European AI Office as the sole supervisor for providers of GPAI models, the enforcement mechanisms and cooperation with national authorities are not fully detailed. The guidelines would outline the supervisory role of the AI Office and how it will coordinate enforcement to ensure uniform application across the EU.
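To make the compute-based test above more concrete, the short Python sketch below applies the widely used approximation for dense transformer models, under which cumulative training compute is roughly 6 × the number of parameters × the number of training tokens. The 10^25 FLOP figure reflects the AI Act’s presumption of systemic risk in Article 51(2); the model size, token count and function names are hypothetical illustrations, not figures taken from the Act or the forthcoming guidelines.

```python
# Illustrative sketch only: a rough training-compute estimate using the
# common approximation for dense transformer models:
#     training FLOPs ~ 6 x parameters x training tokens.
# All model figures below are hypothetical.

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6.0 * n_parameters * n_training_tokens

# Article 51(2) AI Act presumes "high-impact capabilities" (systemic risk)
# above 10^25 floating point operations.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimate_training_flops(n_parameters=70e9, n_training_tokens=2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")  # ~8.4e+23
print("Presumed systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)
```

This approximation ignores architectural details such as sparsity or mixture-of-experts routing, which is one reason the guidelines are expected to set out more precise estimation methodologies.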
Limitations of the guidelines
While the guidelines will provide clarity, several limitations remain:
- Non-binding status. As soft law, the guidelines will not be legally enforceable, but they will steer the supervisory approach of the Commission and the European AI Office, and national authorities are expected to follow the same approach.
- Rapid technological evolution. The thresholds and criteria may require frequent updates to keep pace with advances in AI.
- SME impact. Compliance requirements, particularly around documentation and risk management, may be burdensome for smaller providers, despite efforts to simplify assessment (for example, the use of training compute as a proxy).
- Enforcement detail. The guidelines are not expected to fully resolve questions around enforcement mechanisms or penalties for non-compliance.
Osborne Clarke comment
Together with the GPAI Code of Practice, the forthcoming Commission guidelines will supplement the AI Act in order to achieve a harmonised, risk-based regulatory regime for GPAI models in the EU. Both are expected to be published by the end of May 2025.
By clarifying definitions, compliance pathways and supervisory mechanisms, the guidelines aim to reduce legal uncertainty and foster responsible AI innovation. However, issues such as the protection of intellectual property rights in training data, and clear, well-defined criteria and exceptions for determining regulatory obligations based on a model or its modifications, should be explicitly addressed in the guidelines. In any case, their non-binding nature and the dynamic landscape of AI technology mean that ongoing adaptation and legal vigilance will be essential.