What is shaping up: With the adoption of positions by the Council of the European Union (“Council”) and the European Parliament (“Parliament”), the AI Omnibus enters a critical phase. Trilogue negotiations have started, and over the coming weeks, several technical negotiation rounds will work to narrow the remaining divergences between the institutions' respective positions.
A political agreement on a consolidated text is expected by the next political trilogue meeting on April 28, 2026. Should that timeline hold, endorsement by Parliament and the Council could follow in May and June, respectively, with potential publication in the Official Journal in July 2026, ahead of the August 2, 2026 deadline.
Convergence
Across the board, the institutions are converging on several strategic points:
- Fixed high-risk timelines: Both institutions reject the European Commission's (“EC”) conditional mechanism and replace it with hard dates: December 2, 2027, for stand-alone high-risk AI systems (Annex III) and August 2, 2028, for AI embedded in regulated products (Annex I). This is the dominant negotiating position and will likely survive trilogue negotiations.
- New legal basis for processing of personal data for bias detection and correction: While both institutions support it, during trilogues, institutions will need to converge on the specific language on the guardrails foreseen for its use (Recital 6; Article 4a).
- Prohibition of “nudifier” generative AI systems: Both texts introduce new bans under Article 5 targeting AI systems capable of generating, manipulating or reproducing non-consensual intimate images (NCII) and child sexual abuse material (CSAM). The prohibition applies where the tool is designed for that purpose or where this misuse is reasonably foreseeable based on the system's functionalities and insufficient guardrails. While minor alignment on scope and exemptions may be needed, the prohibitions are highly likely to remain (Article 5(1)(ha)).
- Registration of "exempted" high-risk systems: While the EC proposed to delete the current obligation for AI systems operating in Annex III contexts to be registered, even when a provider has validly assessed them as not high risk, both the Council and Parliament reinstate it. This is the approach that is likely to remain.
- Proportionality benefits for SMCs: All institutions support extending proportionality measures, including simplified documentation, proportional QMS and reduced penalties.
Divergence
Areas still open to negotiation include the following points:
- Literacy: There is a genuine split between a binding duty (Parliament) and soft encouragement (Council). The Council replaces the current obligation on providers and deployers with a purely nonbinding encouragement model. The Parliament, in contrast, retains a legal obligation on providers and deployers to “support the improvement of AI literacy” among their staff, while also requiring the EC to issue practical implementation guidance; it clarifies, however, that this obligation does not guarantee any specific level of individual literacy.
- Governance and AI Office powers: All institutions agree on centralizing oversight for GPAI-based systems where the model and system come from the same provider. However, the Council adds important carve-outs for products covered by sectoral legislation, critical infrastructure, law enforcement, border management, financial supervision and judicial processes, plus a judicial authorization requirement for inspections, conditional upon national law requirements. The Parliament, in contrast, limits its carve-outs primarily to products governed by sector‑specific legislation and critical infrastructure, extends the scope to providers in the same “group of undertakings” (instead of the “same undertaking,” as the Council), and adds an explicit duty for the AI Office to coordinate with DPAs on matters involving personal data. Both texts empower the EC to adopt an implementing act defining the AI Office's enforcement powers, including the ability to impose fines.
- Final synthetic content timing: The obligation itself is not in question, but the Council proposes that it enter into force on February 2, 2027, whereas the Parliament grants a shorter extension, to November 2, 2026 (a six-month postponement versus three).
- Sectoral AI safety integration: The Parliament has taken the significant step of moving all Annex I-A product categories into Annex I-B and horizontally integrating AI Act requirements into multiple sectoral laws (machinery, toys, radio equipment, medical devices, pressure equipment, PPE, gas appliances, cableways, and others). The Council proposal does not cover this topic. This makes the issue politically uncertain and likely to face sectoral resistance (e.g., the Standing Committee of European Doctors has already called for medical devices to continue under the scope of the AI Act).
- Cybersecurity alignment: The Parliament proposes that high-risk AI systems fulfilling the Cyber Resilience Act's essential cybersecurity requirements should be presumed compliant with Article 15 of the AI Act for overlapping elements. The Council did not include any equivalent provision.
- High-risk classification clarification: Finally, the Parliament introduces a new provision clarifying that AI features used purely for convenience, automation or optimization do not constitute safety functions unless their failure creates actual safety risks. This provision could narrow the scope of high-risk AI systems. The Council did not include any such equivalent provision.
- Notified body designation: There is a notable divergence on this topic. The Council supports the EC's single application and assessment procedure mechanism for conformity assessment bodies seeking designation under both the AI Act and Annex I-A sectoral legislation. In contrast, the Parliament has deleted it. Whether this provision survives will depend on the broader product safety architecture discussion.
Next steps for companies
The AI Omnibus introduces meaningful clarifications and targeted adjustments to the EU AI Act but does not alter the fundamental compliance architecture that organizations must prepare for. Although institutional alignment is progressing rapidly, it is important to emphasize that this remains a legislative proposal until it is formally adopted and published. For businesses, the safest course of action is unchanged: continue planning against the original deadline of August 2, 2026. While the proposed fixed timelines for high‑risk systems (2027/2028) are likely to survive the trilogue phase, their practical benefit is limited: they will only take effect once the AI Omnibus is formally adopted, meaning companies should maintain their existing compliance plans.
In practice, the main advantage of the delay concerns enforcement of the high-risk provisions: a later applicability date for those obligations would postpone the point at which competent authorities may begin enforcing them. We will continue monitoring the negotiations through the trilogue phase and provide further updates as soon as the consolidated text becomes available.
This alert follows our initial briefing of November 2025, available here.