New York enacts Responsible AI Safety and Education Act: new transparency, safety, and oversight requirements for frontier model developers

On December 19, 2025, New York Governor Kathy Hochul signed the Responsible AI Safety and Education (“RAISE”) Act, which is scheduled to take effect in January 2027. The Act amends the General Business Law to establish a comprehensive framework for the oversight of advanced artificial intelligence (“AI”) models, known as “frontier models.” The Governor is reported to have signed the version of the bill passed by lawmakers this past spring; to secure the signing, however, lawmakers reportedly agreed to approve a number of changes requested by the Governor when they return to Albany in January. We will monitor activity and update this post as necessary once the final text is issued. Our summary here is based on the text of the bill (S6953B/A6453B) as signed and the Governor’s announcement following the signing.

The RAISE Act imposes new obligations on large AI developers, mandates safety and security protocols, requires incident disclosure, and authorizes enforcement actions to address the risks posed by highly capable AI systems.

Governor Hochul and legislative leaders used California’s recently enacted Transparency in Frontier Artificial Intelligence Act (“TFAIA”) as the framework for New York’s RAISE Act. We discussed the TFAIA in a prior post. At a high level, the RAISE Act adopts the TFAIA-style compute thresholds, pre-deployment safety protocols, and Attorney General enforcement, while diverging on certain definitions of harm, disclosure mechanics, and penalty structures, as summarized below.

This client alert outlines the RAISE Act’s scope, core obligations, and enforcement regime, and highlights where New York aligns with, and departs from, California’s TFAIA.

Scope and Applicability

The RAISE Act applies to large developers of “frontier models,” defined as AI models trained using more than 10^26 computational operations at a compute cost exceeding $100 million, or AI models produced by applying knowledge distillation to such frontier models (i.e., using a larger model, or its output, to train a smaller model with capabilities similar to those of the larger model) at a compute cost exceeding $5 million. The law defines a “large developer” as an entity that has trained at least one frontier model and spent over $100 million in aggregate compute costs to train frontier models (excluding accredited colleges and universities engaged in academic research). Further, the RAISE Act applies only to frontier models developed, deployed, or operating in whole or in part within New York State.

Unlike the RAISE Act, the TFAIA keys its heightened obligations to developer size, defining a “large frontier developer” by annual revenues exceeding $500 million, and does not include compute-cost triggers or explicit coverage of distilled models; the RAISE Act, by contrast, ties coverage to aggregate compute spend and exempts accredited academic research. The TFAIA’s applicability is triggered by developers that train or initiate training of frontier models in California, whereas the RAISE Act applies to frontier models developed, deployed, or operating in whole or in part in New York State. The New York law therefore has a potentially broader geographic reach based on where an AI model is deployed.

Key Requirements

The New York legislation prohibits large developers from deploying a frontier model “if doing so would create an unreasonable risk of critical harm.” While “critical harm” is defined in the statute, “unreasonable risk” is not, leaving open questions for subsequent regulation or litigation.

Beyond this prohibition, the RAISE Act establishes a robust framework for large developers of frontier models, focused on transparency, risk mitigation, and public accountability. The RAISE Act creates the following obligations:

Safety and Security Protocols

Prior to deploying a frontier model, large developers must implement a written safety and security protocol detailing technical and organizational measures to reduce the risk of “critical harm,” defined as incidents resulting in the death or serious injury of 100 or more people, or at least $1 billion in property damage, including those arising from the creation or use of chemical, biological, radiological, or nuclear weapons, or autonomous criminal conduct by AI systems. The protocol must address:

  • protections and procedures to mitigate critical harm risks;
  • cybersecurity measures to prevent unauthorized access or misuse of frontier models;
  • detailed testing procedures to evaluate risk of critical harm, including potential misuse, modification, or loss of control; and
  • compliance mechanisms and designation of senior personnel responsible for oversight.

Developers must retain a copy of the protocol for as long as the frontier model is deployed and for five years thereafter. A copy must also be conspicuously published and provided to the Attorney General and the Division of Homeland Security and Emergency Services. Such copies may be redacted to protect public safety, trade secrets, or employee or customer privacy, or to prevent the release of confidential information or information otherwise controlled by state or federal law.

The TFAIA similarly requires large frontier developers to publish a written “frontier AI framework” addressing, among other things, catastrophic-risk mitigations and cybersecurity practices, with revisions reflecting material modifications posted within 30 days. By contrast, the TFAIA sets a lower “catastrophic risk” casualty threshold (more than 50 people, versus 100 or more under the RAISE Act).

Annual Testing

Under the RAISE Act, large developers are required to record and retain information on specific tests and results used to assess frontier models, ensuring sufficient detail for third-party replication. Developers are also required to perform annual reviews of safety and security protocols to account for changes in model capabilities and industry best practices, with material modifications published in accordance with the above transparency requirements.

The TFAIA likewise mandates that large frontier developers include assessments of catastrophic risks, and the results of those assessments, in their frontier AI frameworks and update those frameworks annually.

Incident Disclosure

The RAISE Act requires large developers to disclose “safety incidents” (defined as known or suspected occurrences of critical harm or an increased risk thereof, including autonomous model behavior, unauthorized access to or release of model weights, control failures, or unauthorized use) to the Attorney General and the Division of Homeland Security and Emergency Services within 72 hours of discovery, or within 72 hours of learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Disclosures must include the incident date, the reasons the incident qualifies, and a plain statement describing the event.

California’s TFAIA similarly mandates prompt reporting of “critical safety incidents,” including loss of control and harmful compromises of model weights. The reporting mechanics differ, however: California requires disclosure to the California Governor’s Office of Emergency Services (“OES”) within 15 days of discovering the critical safety incident (or within 24 hours if there is an imminent risk).

Enforcement and Penalties

The Attorney General is authorized to bring civil actions for violations of the RAISE Act, with penalties of up to $10 million for a first violation and up to $30 million for subsequent violations, as well as injunctive or declaratory relief. According to the announcement from the Governor’s office and parallel reporting, New York plans to establish an oversight function and set penalties of up to $1 million for a first violation and up to $3 million for subsequent violations, which are lower than the limits currently stated in the bill’s text. The RAISE Act does not create a private right of action and does not limit the application of other laws or remedies.

California’s TFAIA likewise provides for exclusive civil enforcement by the Attorney General and imposes penalties for noncompliance, but caps penalties at $1 million per violation. The TFAIA also expressly preempts post-2024 local regulation of frontier-developer catastrophic-risk management, whereas the RAISE Act authorizes higher penalties, permits injunctive or declaratory relief, and contains no express preemption provision.

Looking Ahead

On December 11, 2025, the White House issued an Executive Order that criticizes the growing patchwork of state-by-state AI rules as a barrier to innovation and interstate commerce. While the RAISE Act, like California’s TFAIA, advances comprehensive transparency and safety obligations for frontier developers, it could become a target of the EO’s preemption strategy through litigation, agency action, and legislative proposals. Taken together, the EO, the TFAIA, and the RAISE Act signal a narrowing path for permissible state AI regulation in the near term. At the same time, the similarities between the RAISE Act and the TFAIA could signal to the federal government an approach for potential federal legislation that harmonizes state requirements.

The RAISE Act represents a significant advancement in New York’s approach to AI governance, emphasizing transparency, safety, and accountability. Large developers operating in New York should assess their obligations under the RAISE Act, implement comprehensive safety and security protocols, establish procedures for incident reporting, and ensure ongoing compliance with annual reviews and transparency requirements.
