Opinion

EU Artificial Intelligence Office publishes the final version of the General-Purpose AI Code of Practice

On July 10, 2025, the EU Artificial Intelligence Office (the AI Office) issued the final version of the General-Purpose AI Code of Practice (GPAI Code). The GPAI Code is a non-binding set of guidelines created by independent industry specialists through a collaborative process, aimed at helping providers of general-purpose AI models demonstrate compliance with their obligations under Articles 53 and 55 of the AI Act, which apply from August 2, 2025. The GPAI Code is divided into three chapters: transparency, copyright, and safety and security. While several leading technology companies have signed or expressed an intent to sign the GPAI Code, there are some notable exceptions, underscoring that industry adoption may remain uneven and that further clarification and guidance from the AI Office and European Commission may be necessary to drive broader uptake.

Transparency Chapter

The transparency chapter of the GPAI Code sets out a series of guidelines for how signatories should meet the transparency obligations under the EU AI Act, including how signatories that provide GPAI models should document how each model was developed and how it can be used. The transparency chapter includes the following key commitments:

  1. Model documentation: signatories to the GPAI Code are required to create and maintain detailed documentation for each general-purpose AI model they place on the market by completing a ‘Model Documentation Form’. This documentation covers a wide range of information, including: model architecture and design specifications; input and output modalities; model size and dependencies; distribution methods and licensing; intended and restricted uses; and training process details, including methodologies and rationale. The documentation must be kept up to date, with previous versions retained for at least ten years after the model’s release.
  2. Providing relevant information: signatories to the GPAI Code must make certain information available to downstream providers and, upon request, to the AI Office or national competent authorities. This ensures that those integrating the models into their own systems have the necessary understanding of the model’s capabilities and limitations, and that authorities can carry out their supervisory roles. Information for authorities is only shared when strictly necessary and must be handled with confidentiality, respecting intellectual property and trade secrets. Signatories to the GPAI Code are also encouraged to consider making some information public to enhance overall transparency.
  3. Quality, security and integrity: signatories must ensure the integrity, security and quality of the information that is provided or documented. 

Copyright Chapter

The copyright chapter of the GPAI Code focuses on how organisations can adopt a policy to adhere to the copyright-related obligations under EU copyright and intellectual property law. While adherence to the copyright chapter can help signatories to the GPAI Code evidence compliance, it does not equate to compliance with EU copyright law, which remains subject to national and EU court interpretation. The copyright chapter includes the following commitments:

  1. Copyright policy: signatories to the GPAI Code are required to produce and maintain a policy to comply with EU copyright law in respect of any GPAI model they place on the EU market. There is no requirement to publish it, but the GPAI Code encourages publication of a summary.
  2. Lawful data crawling: when collecting data for model training, signatories to the GPAI Code must only reproduce and extract content that is lawfully accessible. They are not permitted to circumvent technological protections such as paywalls and must avoid crawling websites known for copyright infringement. An EU-curated list of such sites will be made available to guide compliance.
  3. Honouring rights reservations: signatories to the GPAI Code must take steps to detect and respect any reservations of rights expressed by rightsholders in a machine-readable format. This includes using web crawlers that recognise ‘robots.txt’ files and other metadata protocols designed to signal opt-outs. Signatories to the GPAI Code are also encouraged to engage with rightsholders to support the development and adoption of such standards. They should also publish information on how their crawlers operate and must automatically notify affected rightsholders when that information is updated (e.g. by syndicating a web feed).
  4. Mitigating downstream risk: signatories to the GPAI Code must apply suitable technical safeguards and prohibit infringing uses in their terms of use or documentation. This obligation applies whether the model is used internally or licensed to others. 
  5. Complaint mechanism: signatories to the GPAI Code must designate a point of contact for rightsholders and establish a mechanism for receiving and handling complaints. Rightsholders and their representatives should be able to submit substantiated complaints electronically regarding non-compliance with the GPAI Code, and can expect a response within a reasonable time provided the complaint is not manifestly unfounded.
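By way of illustration only (the GPAI Code does not prescribe any particular syntax or protocol), a machine-readable reservation of rights of the kind described in commitment 3 might take the form of a ‘robots.txt’ file that disallows crawlers used for AI training while permitting ordinary indexing. The user-agent names below are examples and would need to be checked against each crawler operator’s own published documentation:

```
# Illustrative robots.txt expressing a reservation of rights
# against AI-training crawlers (example user-agent names only)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers may continue to index the site as normal
User-agent: *
Allow: /
```

In practice, rightsholders may also rely on other metadata protocols (such as page-level meta tags or emerging opt-out standards), which is why the GPAI Code encourages engagement between signatories and rightsholders on common standards.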

Safety and Security Chapter 

The safety and security chapter of the GPAI Code focuses on the obligations of organisations in relation to systemic risk and developing safety and security practices throughout the model lifecycle.

The safety and security chapter sets out ten commitments, including:

  1. Safety and security framework: signatories to the GPAI Code must develop, implement and update a safety and security framework tailored to their model’s risk profile. The framework should describe processes for identifying, evaluating and mitigating systemic risks. The framework should be confirmed before market launch and notified to the AI Office. Implementation of the framework involves regular risk assessments at defined trigger points and a full systemic risk analysis before release. Updates must be made annually or whenever material changes occur.
  2. Risk identification, analysis and acceptance: systemic risks must be identified using methods that take into account prior incidents, market intelligence and emerging trends. Realistic scenarios must then be developed for how these risks might manifest. Each identified risk must be analysed in detail. This includes gathering model-independent information, performing evaluations, constructing risk models, and estimating the likelihood and impact of potential harms. Based on this analysis, signatories to the GPAI Code must determine whether each risk is acceptable using defined acceptance criteria or risk tiers. If a risk is deemed unacceptable, signatories to the GPAI Code must take corrective actions before proceeding with development or deployment.
  3. Mitigations for safety and security: if systemic risks are identified, signatories to the GPAI Code must implement technical and operational mitigation measures to reduce them to acceptable levels. Such measures include filtering training data, limiting access, modifying behaviour and deploying tools to support safer downstream use. Security mitigations focus on protecting the model from unauthorised access or misuse, especially for highly capable models. Signatories to the GPAI Code must define a clear security goal based on expected threats and adopt appropriate cybersecurity practices. 
  4. Safety and security model reports: before placing a model on the market, signatories to the GPAI Code must submit a ‘Safety and Security Model Report’ to the AI Office. This report includes descriptions of the model, its systemic risks, mitigation measures, justification for deployment and external evaluations. The report must be updated if there are new or material changes and large providers must update their report every six months, unless specifically exempted. 
  5. Governance and risk responsibility: clear internal roles and responsibilities must be established across all levels of the organisation. Signatories to the GPAI Code must allocate sufficient resources (people, funding, compute) to manage risk effectively.
  6. Serious incident reporting: signatories to the GPAI Code must track and report serious incidents (such as death, major cybersecurity breaches, or harm to public infrastructure) to the relevant authorities. Reports must include timelines, root causes and corrective actions. Updates on the incident must be provided regularly until the incident is resolved, with a final report submitted within 60 days. 

The GPAI Code will be reviewed by EU Member States and the European Commission and will be further supported by additional Commission guidance on key aspects of general-purpose AI.

The press release is available here, the transparency chapter here, the copyright chapter here and the safety and security chapter here.