
The Commission's guidelines aim to provide legal certainty to actors across the AI value chain by clarifying when and how providers of general-purpose AI models are required to comply with their obligations under the AI Act.
These guidelines are part of a broader package tied to the entry into application on 2 August 2025 of the EU-wide rules for providers of general-purpose AI models. They complement the General-Purpose AI Code of Practice that the Commission received from independent experts on 10 July 2025.
General-purpose AI models play a significant role in encouraging innovation and uptake of AI within the EU, as they can be used for a variety of tasks and integrated into a wide array of AI systems. For this reason, providers of such models have certain obligations under the AI Act.
These obligations include providing information to providers of AI systems that intend to integrate the model into their AI systems and putting in place a policy to comply with EU copyright law. In addition, providers of the most advanced or most impactful general-purpose AI models, namely those presenting systemic risks, are subject to additional obligations to assess and mitigate those risks. Systemic risks include risks to fundamental rights and safety, and risks related to loss of control over the model.
These guidelines clarify the scope of the obligations and to whom they apply. The guidelines focus on four key topics:
- General-purpose AI models: An AI model is considered a general-purpose AI model if it was trained using an amount of computational resources (‘compute’) exceeding 10^23 floating point operations (FLOP) and if it can generate language (whether in the form of text or audio), images from text, or video from text. A rough way to estimate whether a model crosses this compute threshold is sketched after this list.
- Providers of general-purpose AI models: The guidelines outline the concepts of a ‘provider’ and of ‘placing on the market’ and clarify when an actor modifying a general-purpose AI model is considered to become a provider.
- Exemptions from certain obligations: The guidelines clarify under what conditions providers of general-purpose AI models released under a free and open-source license and satisfying certain transparency conditions may be exempt from certain obligations under the AI Act.
- Enforcement of obligations: The guidelines explain the implications for providers of general-purpose AI models that choose to adhere to and implement the General-Purpose AI Code of Practice, and outline the Commission’s expectations regarding compliance from 2 August 2025.
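For illustration only, and not part of the guidelines themselves: training compute is commonly approximated with the 6 × parameters × training tokens rule of thumb, which can then be compared against the 10^23 FLOP threshold mentioned above. The function names and the example model size below are hypothetical.

```python
# Illustrative sketch: estimate whether a model's training compute exceeds the
# 10^23 FLOP threshold used in the guidelines to identify general-purpose AI models.
# The 6 * N * D heuristic (6 FLOP per parameter per training token) is a common
# approximation, not an official formula from the guidelines.

GPAI_COMPUTE_THRESHOLD_FLOP = 1e23  # indicative threshold from the guidelines


def estimate_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute with the 6 * N * D heuristic."""
    return 6.0 * n_parameters * n_training_tokens


def exceeds_gpai_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """Return True if the estimated training compute exceeds 10^23 FLOP."""
    return estimate_training_flop(n_parameters, n_training_tokens) > GPAI_COMPUTE_THRESHOLD_FLOP


if __name__ == "__main__":
    # Hypothetical example: a 7-billion-parameter model trained on 2 trillion tokens.
    params, tokens = 7e9, 2e12
    flop = estimate_training_flop(params, tokens)
    print(f"Estimated training compute: {flop:.1e} FLOP")  # ~8.4e22 FLOP
    print(f"Exceeds 10^23 FLOP threshold: {exceeds_gpai_threshold(params, tokens)}")  # False
```

Such an estimate is only a starting point: whether a model actually falls within the definition also depends on its generative capabilities, as described in the first bullet above.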
Next steps
The AI Act obligations for providers of general-purpose AI models enter into application on 2 August 2025. From that date onwards, providers placing general-purpose AI models on the market must comply with their respective AI Act obligations. Providers of general-purpose AI models that are classified as general-purpose AI models with systemic risk must notify the AI Office without delay. In the first year after entry into application of these obligations, the AI Office will work closely with providers, in particular those who adhere to the General-Purpose AI Code of Practice, to help them comply with the rules. From 2 August 2026, the Commission’s enforcement powers enter into application.
Providers of general-purpose AI models already on the market before 2 August 2025 must comply with the relevant obligations under the AI Act by 2 August 2027.
Background
The AI Act’s obligations for providers of general-purpose AI models aim to ensure that such models are transparent and comply with EU and national copyright law, and that providers of the most advanced or most impactful general-purpose AI models assess and mitigate the systemic risks presented by these models. To help providers of general-purpose AI models comply with the AI Act, the Commission is making available a package of information, which includes the guidelines, the General-Purpose AI Code of Practice, and further tools to assist providers, such as the AI Act Service Desk.
The guidelines were developed through a public consultation, during which the Commission gathered input from hundreds of stakeholders. They also reflect input from the expert pool set up by the Commission’s Joint Research Centre to advise the AI Office on the classification of AI models, including general-purpose AI models and those with systemic risk, as well as feedback from other experts.
While not legally binding, these guidelines set out the Commission’s interpretation and application of the AI Act, which will guide its enforcement actions.