Opinion
EU AI Act: Key changes in the recently leaked text
A recent leak reveals the final agreement that the Council of the EU and the European Parliament reached on the AI Act in December 2023. The AI Act will become the first comprehensive set of rules worldwide concerning the use of Artificial Intelligence (“AI”). It essentially applies to anyone who supplies or uses the technology in the EU, regardless of where they are based. We set out below some of the key changes in the leaked text:
1. New criteria for high-risk AI systems
The leaked text introduces new criteria clarifying when AI systems listed in Annex III of the AI Act are exempt from the high-risk classification.
The AI Act follows a risk-based approach, which means that it imposes the most obligations and restrictions on high-risk AI systems. These include the AI systems listed in Annex III, such as those used for recruitment or promotion purposes, or those used for evaluating individuals' creditworthiness (except for fraud detection).
The new criteria acknowledge that there may be cases in which the AI systems listed in Annex III do not entail a high risk and can thus be exempt from the rules on high-risk AI systems. For example, the exemption applies if the AI system is only intended to perform a narrow procedural task or to improve the result of a previously completed human activity. However, AI systems listed in Annex III are always considered high-risk if the AI system performs profiling of natural persons (within the meaning of the GDPR).
Providers of AI systems who consider that their AI system is not high-risk on the basis of the exemption criteria must document their assessment before the system is placed on the market or put into service and must provide this documentation to national competent authorities upon request. Such providers must also register the system in a dedicated EU database.
2. Dedicated rules for general purpose AI models
The leaked text also reveals a new dedicated set of rules and penalties for providers of general purpose AI models (GPAI models). These are AI models that are not constrained to a specific domain but can perform a wide range of distinct tasks and can be integrated into a variety of downstream systems or applications.
For example, providers of GPAI models will have to keep and provide documentation on the functioning of the model, enabling downstream stakeholders to have a “good understanding of the model, its capabilities and limitations”, and put in place a policy to comply with Union copyright law, in particular to identify and respect, including through state-of-the-art technologies, reservations of rights.
The leaked text also imposes additional obligations on providers of GPAI models with systemic risk. These are GPAI models that either have “high impact capabilities” (i.e. the cumulative amount of compute used for their training, measured in floating point operations (FLOPs), is greater than 10^25) or are designated as having systemic risk by the Commission. The Commission will publish and maintain a list of such GPAI models.
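To put the 10^25 figure in perspective, the short sketch below applies the widely used rule of thumb that training compute is roughly 6 FLOPs per model parameter per training token. This is a back-of-the-envelope illustration only; the approximation, the model size and the token count are our assumptions, not anything prescribed by the AI Act.

```python
# Back-of-the-envelope check against the 10**25 FLOP threshold in the
# leaked text. Uses the common approximation FLOPs ~= 6 * parameters * tokens.
# The model size and token count below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_tokens

# Hypothetical model: 70 billion parameters, 15 trillion training tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~6.3e24
print("Presumed high-impact capabilities:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```

On these hypothetical figures the model falls just below the threshold; a modestly larger training run would cross it.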
Moving forward, providers of GPAI models will be subject to potential fines of up to 3% of their total annual turnover or EUR 15M, whichever is higher. The EU Commission will be solely competent for supervision and enforcement in respect of providers of GPAI models and may delegate its powers to the soon-to-be-established AI Office.
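The “whichever is higher” mechanic means the EUR 15M floor applies to smaller providers, while the 3% measure governs above EUR 500M of turnover. A minimal illustration, using hypothetical turnover figures:

```python
# Illustrative only: the "3% of total annual turnover or EUR 15M, whichever
# is higher" fine ceiling from the leaked text. Turnovers are hypothetical.

def max_gpai_fine_eur(annual_turnover_eur: float) -> float:
    """Maximum fine: the higher of 3% of turnover and EUR 15 million."""
    return max(0.03 * annual_turnover_eur, 15_000_000)

for turnover in (100_000_000, 500_000_000, 2_000_000_000):
    print(f"Turnover EUR {turnover:,} -> max fine EUR {max_gpai_fine_eur(turnover):,.0f}")
# EUR 100M turnover -> EUR 15M  (the floor applies)
# EUR 500M turnover -> EUR 15M  (the break-even point)
# EUR 2B turnover   -> EUR 60M  (the 3% measure applies)
```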
3. Exemptions for open-source GPAI models
Under the leaked text, providers of GPAI models that are released under a free and open-source license, and whose parameters and related information are made publicly available, are exempt from certain transparency-related requirements imposed on GPAI models, such as the obligation to keep and provide documentation on the functioning of the model.
This exemption does not apply to open-source GPAI models that are considered to present a systemic risk. Open-source GPAI models also remain subject to the obligation to produce a summary about the content used for model training and the obligation to put in place a policy to respect Union copyright law.
4. Changes to the deadlines for compliance with the AI Act
The leaked text brings some changes to the deadlines for when the rules under the AI Act will start to apply. The main deadline is 24 months after the AI Act enters into force, but there are some exceptions. Here are the main ones (a short timeline sketch follows the list):
- Within 6 months, the prohibitions on certain AI practices, such as social scoring for certain purposes, will apply.
- Within 12 months, the rules on GPAI models will apply. Providers of GPAI models that were already on the market before that date will have an additional 12 months to comply.
- Within 36 months, the obligations will apply for high-risk AI systems that are intended to be used as a safety component of a product, or that are themselves a product, covered by the specific Union harmonization legislation listed in Annex II of the AI Act.
- High-risk AI systems that were already on the market or in use before the entry into force of the AI Act have to comply only if they undergo significant changes in their design. However, those used by public authorities will have 4 years to comply.
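A minimal date-arithmetic sketch of these milestones, assuming a purely hypothetical entry-into-force date (the actual date depends on publication, which remains unknown):

```python
# Hypothetical compliance timeline for the AI Act milestones above.
# The entry-into-force date is an assumption for illustration only.
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, clamping to the month's last day."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

ENTRY_INTO_FORCE = date(2024, 6, 1)  # hypothetical

milestones = {
    "Prohibitions on certain AI practices": 6,
    "Rules on GPAI models": 12,
    "Main deadline (most other obligations)": 24,
    "High-risk systems under Annex II product legislation": 36,
}
for label, months in milestones.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, months)}")
```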
The AI Act will enter into force on the twentieth day following that of its publication, but the publication date remains unknown at the time of writing.
For further information on AI in general and on specific topics, please visit our AI insights page.
Contact one of our firm-wide group of AI experts.
This content was originally published by Allen & Overy before the A&O Shearman merger