Technology, Media & Telecommunications (TMT)

The EU’s Commitment to GPAI Providers: Building a Trustworthy and Transparent AI Ecosystem

01 Aug 2025

6 min read

Author: Erika Criscione

As the 2 August 2025 deadline approaches for the entry into application of the provisions on general-purpose AI (GPAI) models, the European Commission has intensified its efforts to guide GPAI providers in meeting their regulatory obligations under the EU Artificial Intelligence Act (“AI Act” or “Act”).

To this end, on 18 July the Commission released its Guidelines on the scope of obligations for providers of general-purpose AI models under the AI Act. These Guidelines closely follow the publication of the final version of the GPAI Code of Practice on 10 July.

In addition, the Commission published a standardised GPAI training data summary template on 24 July.

Together, these initiatives aim to ease the transition to compliance with the AI Act, providing clarity and consistency for GPAI developers while promoting a culture of accountability, trust and innovation in the European AI ecosystem.

To understand clearly who is subject to this set of rules and obligations, it is worth recalling what a GPAI provider is and what constitutes a GPAI model under the Act.

GPAI providers[1] are entities (natural or legal persons, public authorities, agencies, or other bodies) that develop a general-purpose AI model or have one developed and then place it on the market or put it into service under their own name or trademark.

General-purpose AI systems are defined under the Act as AI systems that are based on general-purpose AI models (GPAI models) which have the capability of serving a variety of purposes[2]. These include those AI models[3] which are capable of performing a wide range of distinct tasks and which can be integrated into a variety of downstream systems or applications.

The AI Act explicitly excludes from this definition AI models that are used for research, development or prototyping activities before they are placed on the market.

Key Obligations Imposed on GPAI Providers under the AI Act

The AI Act[4] introduces a set of core obligations for providers of General-Purpose AI (GPAI) models, aimed at ensuring transparency, accountability and safe integration.

Pursuant to Article 53 of the Act, providers are required to prepare and maintain up-to-date technical documentation for their models. They must also make available relevant information and documentation to downstream developers (i.e. those integrating GPAI models into their own AI systems) so that these developers can meet their respective legal obligations under the Act.

Importantly, and without prejudice to intellectual property rights or the protection of confidential business information, GPAI providers must ensure that the documentation made available allows downstream providers to understand the capabilities and limitations of the GPAI model. This information must, at a minimum, include the elements listed in Annex XII of the AI Act.

They are also required to implement a copyright policy and to publish a summary of the content of the training data used to develop the model.

For GPAI models identified as posing systemic risk, additional obligations apply, including model evaluation, adversarial testing, incident tracking and reporting, and cybersecurity safeguards.

GPAI Code of Practice

The Code of Practice is a voluntary framework designed to support providers of General-Purpose AI (GPAI) models in aligning with the requirements of the EU AI Act, with particular emphasis on safety, transparency and copyright obligations.

It provides structured, practical guidance to support compliance with the AI Act. Although non-binding, adherence to the Code allows GPAI providers to demonstrate a proactive approach to regulatory alignment, offering enhanced legal certainty in meeting their obligations and potentially reducing associated administrative burdens.

The Code is structured into three chapters, with the first two – transparency and copyright – applicable to all GPAI providers:

The Transparency chapter includes a template form through which signatories can insert relevant information to demonstrate compliance with transparency requirements, whilst the Copyright chapter offers practical guidance for developing and implementing a copyright policy, as mandated by the AI Act.

The chapter on Safety and Security outlines concrete state-of-the-art safety and security practices for managing systemic risks. This chapter is relevant only to providers of the most advanced GPAI models posing systemic risk, under Article 55 AI Act.

Guidelines for Providers of General-Purpose AI Models

The Guidelines on the scope of obligations for providers of GPAI models serve as a complementary instrument to the GPAI Code of Practice, offering practical clarity for stakeholders across the AI ecosystem. They are designed to help providers assess whether the AI Act’s obligations apply to them and to understand the nature of those obligations.

While they are non-binding, the Guidelines provide valuable insight into the likely approach that national regulators will take in interpreting and enforcing the provisions of the AI Act.

Key points of the Guidelines are the following:

  • Clarified definition of GPAI models: a model qualifies as a GPAI model if it has been trained using computational resources exceeding 10²³ floating point operations (“FLOPs”) and is capable of generating language, text-to-image, or text-to-video outputs. This threshold provides a clear benchmark for developers in determining whether their models fall within the scope of the AI Act’s obligations.
  • Identification of GPAI providers: the Guidelines elaborate on the definition of a “provider” and the concept of “placing on the market,” offering important clarifications on when an entity modifying or distributing a GPAI model assumes the role of a provider under the AI Act.
  • Exemptions for Open-Source models: to foster innovation and uphold transparency, the Guidelines also set out the specific conditions under which providers of open-source GPAI models may benefit from exemptions to certain obligations, ensuring a balanced regulatory approach.
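The 10²³ FLOP benchmark above is, in essence, a simple numerical comparison. As a rough illustration only (hypothetical helper names, not legal advice, and deliberately ignoring the other factors a full legal assessment would weigh), an internal scope-screening check could be sketched in Python as follows:

```python
# Indicative training-compute benchmark from the Commission's Guidelines:
# 10^23 floating point operations (FLOPs).
GPAI_FLOP_THRESHOLD = 1e23


def may_be_gpai(training_flops: float, generates_language_or_media: bool) -> bool:
    """First-pass screening against the Guidelines' indicative criteria.

    A model may qualify as a GPAI model if its training compute exceeds
    the 10^23 FLOP benchmark AND it can generate language, text-to-image
    or text-to-video outputs. This sketch is illustrative only; the
    Guidelines treat the threshold as a benchmark, not a bright-line test.
    """
    return training_flops > GPAI_FLOP_THRESHOLD and generates_language_or_media


# Example: a text-generating model trained with ~5 x 10^24 FLOPs
print(may_be_gpai(5e24, True))   # True  (above threshold, generative)
print(may_be_gpai(1e21, True))   # False (below threshold)
```

The check is deliberately conservative: a model flagged `True` would merit a full assessment under the Guidelines, while a `False` result does not by itself exclude other AI Act obligations.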

GPAI Training Data Summary Template

As part of its broader effort to implement the AI Act, the European Commission has introduced a standardised Training Data Summary Template for providers of GPAI models. This template is designed to enhance transparency regarding the data used during the training phase, without compromising intellectual property rights or commercially sensitive information. It requires providers to disclose key details about the model, the developer, the scope and nature of the training data, and the data collection methods employed.

Penalties

Under Article 101 AI Act, providers of GPAI models that do not comply with the AI Act may face administrative fines of up to 3% of their annual total worldwide turnover in the preceding financial year or EUR 15 000 000, whichever is higher.

Key Compliance Deadlines for Providers of General-Purpose AI (GPAI) Models under the AI Act

  • From 2 August 2025, all providers placing GPAI models on the EU market must comply with the applicable obligations set out in the AI Act.
  • Providers whose models are classified as GPAI models with systemic risk are required to notify the AI Office without delay upon such classification.
  • Providers of GPAI models that were placed on the market before 2 August 2025 must bring their models into compliance with the relevant provisions of the AI Act by 2 August 2027.
  • The Commission’s enforcement powers with respect to GPAI-related obligations will become fully applicable as of 2 August 2026. From that date onward, the European Commission will be empowered to monitor and enforce compliance, including through the imposition of administrative fines for non-compliance.

[1] Article 3(3) AI Act

[2] Article 3(66) AI Act

[3] Article 3(63) AI Act

[4] Article 53 AI Act
