
Law No. 132 of 23 September 2025: Italy’s leadership in national AI regulation

With the approval of Law No. 132 of September 23, 2025 (the “Law”), Italy becomes the first European Union member state to adopt a comprehensive, dedicated law aligned with Regulation (EU) 2024/1689 (the “AI Act”). The Law anticipates the AI Act's full entry into force and establishes a national framework for governance, supervision, and support for innovation in the field of artificial intelligence (AI).

Background

Draft Law 1146/24 was the subject of various opinions, including from the Italian Data Protection Authority (the “Garante”) and the European Commission. The latter, in its opinion of September 12, 2024, highlighted the need for greater consistency with the AI Act and for greater openness toward the use of artificial intelligence. The text, amended in part in line with the recommendations received, was approved by the Italian Parliament on September 17, 2025, and published in Official Gazette No. 223 of September 25, 2025.

The sectors concerned

The first section of the Law, dedicated to general principles, introduces a national strategy on AI, to be updated every two years by the Interministerial Committee for Digital Transition with the support of the Department for Digital Transformation of the Presidency of the Council of Ministers. This strategy will serve as a reference for policy and regulatory decisions on AI.

The second section, by contrast, contains rules for individual sectors, indicating the areas and methods of AI use. Specifically, in healthcare and scientific research, AI is permitted as a support tool but cannot be used to discriminate or to decide access to treatment; the human role remains central, with responsibility for final decisions resting with humans. Public and private non-profit research is classified as being of significant public interest, allowing the processing of personal data without consent, subject to approval by ethics committees and notification to the Garante.

In addition, the second section focuses on the labor and justice sectors. The Italian system emphasizes the accountability of “deployers,” extending organizational controls even beyond high-risk use cases and linking the new obligations to those already familiar from privacy law (DPIA, privacy by design, data governance).

Employment

In the labor sector, the Law introduces specific safeguards for the management of worker selection, evaluation, and monitoring processes using AI systems. It establishes transparency obligations for employers, the right to information for employees, and the need for impact assessments to prevent algorithmic discrimination.
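By way of illustration only, the sketch below shows one elementary check that such an impact assessment might include: a comparison of the selection rates an automated screening tool produces for different candidate groups (the so-called adverse impact or "four-fifths" ratio). The metric, the 0.8 threshold, and the sample data are illustrative assumptions; the Law does not prescribe any specific method.

# Illustrative sketch: a selection-rate comparison sometimes used to screen an
# automated hiring tool for disparate impact. The 0.8 ("four-fifths") threshold
# and the sample data are assumptions, not requirements of the Law.
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of selected candidates per group."""
    totals: Counter[str] = Counter()
    selected: Counter[str] = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # (group, selected by the screening tool?) -- fabricated sample data
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(outcomes)
    if adverse_impact_ratio(rates) < 0.8:  # common rule of thumb, not a legal standard here
        print("Potential disparate impact: review the screening criteria")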

The Law also establishes an observatory on the impact of AI on work, with the aim of maximizing the benefits and mitigating the risks of AI systems in the workplace and of promoting AI training for workers and employers.

Justice

In the justice sector, the legislation establishes strict criteria for the use of AI systems in the judicial sphere, both for case management and for decision support. Human supervision is strengthened and the use of AI for interpretative activities is prohibited: automated analysis tools may be used only for organizational, work-simplification, and administrative purposes, thus ensuring full protection of the right of defense and of the confidentiality of the parties.

IP

The Law specifies that copyright protection also extends to works “created with the aid of artificial intelligence,” provided that they are the result of the author's intellectual work. It is therefore clarified that material generated solely by AI is not eligible for protection. Reproduction and extraction of text and data using AI are permitted if the sources are lawfully accessible.

Governance

In terms of governance, the national strategy for AI is entrusted to the Presidency of the Council of Ministers, with the involvement of the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN) as national authorities, and of the sectoral supervisory authorities (Bank of Italy, CONSOB, IVASS) within their respective areas of competence. Particular attention is paid to cybersecurity, which is treated as an essential prerequisite throughout the life cycle of AI systems and models.

At the institutional level, coordination between the national authorities (AgID, ACN, sectoral authorities) and the Garante will be decisive in aligning risk assessments of AI systems, including under the GDPR and in ethical impact assessments.

Interconnections with the AI Act, NIS2, and GDPR

The Law addresses the areas the AI Act leaves to national legislators: it identifies supervisory authorities and their powers, regulates inspections, supports SMEs and public administrations, and defines sanctions for conduct that is not fully harmonized (e.g., deepfakes). Even within the constraints of European harmonization, the national legislator raises the bar on organizational safeguards and “procedural” requirements (transparency, controls, training, documentation), extending them to low-risk cases and to sensitive sectors such as labor, health, and justice.

On the generative AI and deepfake front, Italy is taking a more prescriptive approach: it is introducing criminal offenses and mechanisms for content traceability and authenticity, while the AI Act favors information obligations and codes of conduct (with enhanced requirements for systems posing systemic risk). The result is a model integrated with the GDPR, NIS2, and sector rules, which aims to translate general European clauses into verifiable operational controls.
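Purely by way of example, the sketch below shows one elementary form such traceability and authenticity mechanisms can take: a provenance manifest recording that a piece of content was AI-generated, together with an integrity hash. The manifest format, field names, and functions are illustrative assumptions; the Law does not prescribe a specific technical scheme.

# Illustrative sketch only: attaching simple provenance metadata to AI-generated
# content. The manifest format and field names are assumptions, not a scheme
# mandated by Law No. 132/2025 or the AI Act.
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content: bytes, generator: str) -> dict:
    """Record that a piece of content is AI-generated, plus an integrity hash."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # detects later tampering
        "ai_generated": True,                           # transparency flag
        "generator": generator,                         # which system produced it
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_content(content: bytes, manifest: dict) -> bool:
    """Check that the content still matches the hash recorded at generation time."""
    return hashlib.sha256(content).hexdigest() == manifest["sha256"]

if __name__ == "__main__":
    frame = b"...synthetic media bytes..."
    manifest = build_provenance_manifest(frame, generator="in-house image model")
    print(json.dumps(manifest, indent=2))
    print("authentic:", verify_content(frame, manifest))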

Conclusions

In concrete terms, compliance will depend on the ability to orchestrate governance and controls. Among the priorities for companies, we can certainly identify the need to:

  • map systems and classify risks (a minimal illustrative sketch follows this list);
  • integrate DPIAs;
  • define the roles and responsibilities of developers and users;
  • include “AI Act-ready” contractual clauses in the supply chain;
  • implement technical measures, including for content mapping and for incident management and reporting.
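
A minimal sketch of the first point follows, assuming an internal register of AI systems keyed to the AI Act's risk tiers; the class names, fields, and example entry are illustrative assumptions rather than requirements of the Law.

# Hypothetical sketch of an AI system inventory ("map systems and classify
# risks"). Names, fields, and the example entry are illustrative assumptions,
# not prescribed by Law No. 132/2025 or the AI Act.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """Tiers broadly mirroring the AI Act's risk classification."""
    PROHIBITED = "prohibited practice"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    role: str                     # e.g., "provider" or "deployer"
    risk_tier: RiskTier
    dpia_completed: bool = False  # ties the register to GDPR obligations
    human_oversight: str = ""     # who reviews or overrides the system's output
    notes: list[str] = field(default_factory=list)

register = [
    AISystemRecord(
        name="CV screening assistant",
        purpose="Support candidate pre-selection",
        role="deployer",
        risk_tier=RiskTier.HIGH,
        dpia_completed=True,
        human_oversight="HR manager validates every shortlist",
    ),
]

# Simple control test: flag high-risk systems missing a DPIA or an oversight owner.
for record in register:
    if record.risk_tier is RiskTier.HIGH and not (record.dpia_completed and record.human_oversight):
        print(f"Action required: {record.name}")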

Finally, active monitoring of European implementing acts and national guidelines (including those of the Garante) is necessary, because these instruments will give rise to evaluation criteria, technical parameters, and inspection priorities. Those who act now will not only reduce the risk of sanctions but also gain a competitive advantage, transforming compliance into a foundation for the quality, security, and reliability of AI systems.
