
Zooming in on AI: California’s Evolving AI Legal Landscape Entering 2026

As we start 2026, Daren Orzechowski, Alex Touma, Bertrand Nzabandora, and Quinn Hendricks discuss recent and pending California legislation and regulatory developments relating to artificial intelligence.

Introduction

2025 was a year of rapid technological advancements with respect to artificial intelligence (“AI”). There were also significant legal developments, with a few states moving to enact AI-focused legislation. California emerged as a leader. Below we summarize the state of AI legislation in California and highlight AI-related laws that went into effect as of January 1, 2026.

In addition to state-level legislation, there have been changes at the federal level with respect to regulating AI. On December 11, 2025, the White House issued the Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence” (the “EO”), which signaled a federal approach to preempt state laws regarding AI. While the EO itself does not create binding laws that preempt state laws, it does mark a significant escalation in federal efforts to create a national framework for AI.

Currently, AI is regulated through a variety of federal, state, and local laws, consisting of both AI-specific laws as well as existing legal frameworks that courts and agencies are applying to AI, such as those regarding consumer protection, civil rights, intellectual property, data privacy, and antitrust. Many states have passed AI-specific laws addressing topics such as deepfakes, digital replicas, employment-related automated decision systems, transparency, and consumer protection. California, home to many AI developers and deployers, has enacted a comprehensive suite of AI laws focusing on these issues as well as laws directly targeting AI developers, ensuring that such developers operate in a manner providing transparency, accountability, and safety.

The effect of the EO on these state and local laws is yet to be determined. In the interim, companies should continue to monitor and comply with these laws. Our team is monitoring developments in this space closely.

Below we summarize some of California’s latest AI-related laws and some of the AI-related bills vetoed by California Governor Newsom.

Newly Enacted Laws in California

Transparency

SB 53: Transparency in Frontier Artificial Intelligence Act

On September 29, 2025, Governor Newsom signed into law Senate Bill 53, entitled the Transparency in Frontier Artificial Intelligence Act (“SB 53”). SB 53 took effect on January 1, 2026.

Among other things, SB 53 requires that large frontier developers (i.e., frontier developers with annual gross revenues (together with affiliates) exceeding $500 million in the previous year) publish and maintain a frontier AI framework describing their approach to managing, assessing, and mitigating catastrophic risks. A catastrophic risk is defined as a foreseeable and material risk that the use of the developer’s frontier model will materially contribute to the death or serious injury of more than 50 people or to over $1 billion in property damage in a single incident, through actions such as providing expert assistance in the creation or release of weapons, carrying out cyberattacks or certain crimes without meaningful human oversight, or evading the frontier developer’s or user’s control.

SB 53 is available here. For a more detailed discussion, see our client alert here.

AB 853: California AI Transparency Act

On October 13, 2025, Governor Newsom signed into law Assembly Bill 853, entitled the California AI Transparency Act (“AB 853”). AB 853 expands upon, and delays the effective date of, the requirements under Senate Bill 942, also called the California AI Transparency Act (“SB 942”), signed into law in 2024.

SB 942 imposes requirements on developers of AI systems that are accessible within California. Such systems are defined as AI systems that can generate derived synthetic content, including text, images, video, and audio, that emulates the structure and characteristics of the system’s training data. Developers of these AI systems with over 1,000,000 monthly visitors or users are required to provide tools to detect AI-generated content and to comply with transparency and contractual requirements regarding AI-generated content. The law was to take effect on January 1, 2026.

AB 853 delays the effective date of SB 942 until August 2, 2026, and introduces new transparency and disclosure requirements, including those for (i) “generative AI hosting platforms,” defined as an internet website or application that makes the source code or model weights of a generative AI system available for download by California residents; (ii) “large online platforms,” defined as public-facing social media, file-sharing, mass messaging platforms, or stand-alone search engines with over 2,000,000 unique monthly users during the preceding 12 months; and (iii) “capture device manufacturers,” defined as persons who produce devices that can record photographs, audio, or video content in California.

Starting on January 1, 2027, the requirements of SB 942 will apply not only to AI developers but also to generative AI hosting platforms. On that date, large online platforms must also begin detecting whether data regarding the origin and authenticity of content distributed on their platforms complies with widely adopted specifications set by established standards bodies, such as the International Organization for Standardization (“ISO”) and the International Electrotechnical Commission (“IEC”).

In an effort to help consumers and platforms distinguish between human-generated and synthetic content, starting on January 1, 2028, manufacturers of capture devices first produced for sale in California on or after that date must provide users with an option to include a latent disclosure in captured content that states: (i) the capture device manufacturer’s name; (ii) the name and version number of the capture device that created or altered the content; and (iii) the time and date of the content’s creation or alteration.

Non-compliance with AB 853 may result in a penalty of $5,000 per violation, with each day of non-compliance considered a separate violation.

AB 853 is available here.

Healthcare

AB 489: Health Care Professions; Deceptive Terms or Letters; Artificial Intelligence

On October 11, 2025, Governor Newsom signed into law Assembly Bill 489, entitled “Health care professions: deceptive terms or letters: artificial intelligence” (“AB 489”). AB 489 took effect on January 1, 2026.

AB 489 aims to prevent deceptive representations by AI systems and related technologies by prohibiting such technology from using specified terms, letters, or phrases that imply that a user is receiving care from a licensed health care professional when no such human oversight exists. AB 489 also prohibits the use of such terms, letters, and phrases in the advertising or functionality of AI systems. AI system developers and deployers are liable for systems that violate AB 489. The appropriate health care licensing board or enforcement agency may seek injunctions or restraining orders and use other available remedies. Each use of a prohibited term, letter, or phrase will be treated as a separate violation.

AB 489 is available here.

Data Brokers

SB 361: Data Brokers; Data Collection and Deletion

On October 8, 2025, Governor Newsom signed into law Senate Bill 361, entitled “Data brokers: data collection and deletion” (“SB 361”). SB 361 took effect on January 1, 2026.

SB 361 requires data brokers (i.e., businesses that knowingly collect and sell consumers’ personal information to third parties without having a direct relationship with those consumers) to register annually with the California Privacy Protection Agency and disclose information about the types of personal information they collect and share. Such personal information includes names, dates of birth, government-issued identification numbers, biometric data, citizenship and immigration status, union membership, sexual orientation, gender identity, user geolocation, and reproductive health care data. Data brokers must also disclose whether, in the past year, they have sold or shared consumers’ data with foreign entities, federal and state governments, law enforcement (unless pursuant to subpoena or court order), or developers of generative AI systems. Further, starting on January 1, 2028, data brokers must undergo an independent third-party audit every three years to determine compliance with SB 361 and submit audit reports within five days of a written request from the California Privacy Protection Agency.

SB 361 is available here.

Prohibitions on Defense

AB 316: Artificial Intelligence; Defenses

On October 13, 2025, Governor Newsom signed into law Assembly Bill 316, entitled “Artificial Intelligence: Defenses” (“AB 316”). AB 316 took effect on January 1, 2026.

AB 316 provides that in an action against a defendant who is alleged to have caused harm through the development, modification, or use of AI, the defendant may not assert as a defense that the AI acted autonomously. Defendants, however, may still use other affirmative defenses (including evidence relevant to causation or foreseeability) and present evidence relevant to the reasonable fault of any other party.

AB 316 is available here.

Consumer Protection

AB 325: Cartwright Act; Violations

On October 10, 2025, Governor Newsom signed into law Assembly Bill 325, entitled “Cartwright Act: violations” (“AB 325”). AB 325 took effect on January 1, 2026.

AB 325 makes it unlawful for any person to use or distribute a “common pricing algorithm” as part of a contract, combination, or conspiracy to restrict the availability of a product or service. A “common pricing algorithm” is defined as “any methodology, including a computer, software, or other technology, used by two or more persons, that uses competitor data to recommend, align, stabilize, set, or otherwise influence a price or commercial term.” AB 325 also prohibits coercing another party to adopt a price or commercial term recommended by such algorithm.

Because AB 325 adds new prohibitions to the Cartwright Act, violations are subject to existing remedies and sanctions. Criminal violations are prosecutable under the Cartwright Act. Civil actions may seek injunctive relief and treble damages, along with costs and attorneys’ fees when authorized for prevailing plaintiffs. AB 325 does not create new penalty amounts and does not change or limit the applicability of other antitrust laws.

AB 325 is available here.

SB 243: Companion Chatbots

On October 13, 2025, Governor Newsom signed into law Senate Bill 243, entitled “Companion chatbots” (“SB 243”). SB 243 took effect on January 1, 2026.

A “companion chatbot” is defined as “an AI system with a natural language interface that provides adaptive, human-like responses to user inputs and can meet a user’s social needs.” SB 243 requires operators of companion chatbot platforms to (i) provide clear and conspicuous disclosures when a reasonable person could be misled into believing that they are interacting with a human; (ii) disclose that companion chatbots may not be suitable for some minors; (iii) implement and publish protocols to prevent production of suicidal ideation, suicide, or self‑harm content and to provide crisis referral notifications; and (iv) with respect to users who the operator knows are minors (a) disclose that the user is interacting with AI, (b) provide a periodic reminder, at least every three hours, that the user is interacting with AI and for the user to take a break, and (c) institute reasonable measures to prevent the chatbot from producing sexually explicit content or solicitations.

Starting on July 1, 2027, operators must also comply with annual reporting obligations to the Office of Suicide Prevention.

The law authorizes private civil actions for injury, including injunctive relief and damages (the greater of actual damages or $1,000 per violation), and reasonable attorneys’ fees and costs.

Note that on October 13, 2025, Governor Newsom vetoed a similar bill, Assembly Bill 1064, entitled: the Leading Ethical AI Development Act (“AB 1064”). In vetoing AB 1064, Governor Newsom stated that its restrictions risked effectively banning minors’ use of chatbots, and he favored an approach that safeguards youth while allowing them to learn to interact with AI safely. He further explained that SB 243 advances several of AB 1064’s core objectives, such as disclosures and protections against self‑harm content, without imposing sweeping prohibitions.

SB 243 is available here.

AB 621: Deepfake pornography

On October 13, 2025, Governor Newsom signed into law Assembly Bill 621, entitled “Deepfake pornography” (“AB 621”). AB 621 took effect on January 1, 2026.

AB 621 amends Section 1708.86 of the California Civil Code to strengthen protections for individuals depicted in digitized sexually explicit material. AB 621 defines “digitized sexually explicit material” as any portion of a visual or audiovisual work, including an image, that is created or substantially altered through digitization and depicts an individual nude or appearing to engage in, or be subjected to, sexual conduct. A depicted individual is provided with a cause of action if a person: (i) creates and intentionally discloses digitized sexually explicit material portraying the depicted individual, and the person knows, or reasonably should know, that the depicted individual in that material did not consent to its creation or disclosure or was a minor when the material was created; (ii) intentionally discloses digitized sexually explicit material portraying the depicted individual that the person did not create, and the person knows, or reasonably should know, that the depicted individual in that material did not consent to the creation of the digitized sexually explicit material or was a minor when the material was created; or (iii) knowingly facilitates or recklessly aids or abets conduct in (i) or (ii). AB 621 presumes that service providers that enable the ongoing operation of a deepfake pornography service knowingly facilitate or recklessly aid or abet the conduct described in (i) or (ii) above if they receive specified notice and fail to take all necessary steps to stop providing services that enable the ongoing operation of a deepfake pornography service within 30 days, subject to limited extensions where law enforcement permits more time to take such steps.

AB 621 provides a successful plaintiff with a number of remedies, including the ability to recover between $1,500 and $50,000 per action or up to $250,000 for unlawful acts committed with “malice” (defined as an intent to cause harm to the depicted individual or to engage in despicable conduct with a willful and knowing disregard of the rights of the depicted individual).

AB 621 further authorizes public prosecutors to bring civil actions to enforce AB 621, without the need to prove that a depicted individual suffered actual harm. A prevailing public prosecutor is entitled to a number of remedies, including civil penalties of $25,000 per violation or $50,000 per malicious violation, as well as attorneys’ fees. AB 621 provides exemptions for disclosures made in law enforcement, legal proceedings, or matters of legitimate public concern or newsworthy value, clarifies that disclaimers are not a defense, and preserves other available remedies.

AB 621 is available here.

Vetoed Legislation

While the following bills were not passed into law, they provide insight into the thinking of California’s legislators and Governor Newsom.

Social Media Platforms

SB 771: Personal Rights; Liability; Social Media Platforms

On September 11, 2025, the California State Assembly and Senate (collectively, the “California State Legislature”) passed Senate Bill 771, entitled “Personal rights: liability: social media platforms” (“SB 771”), which Governor Newsom vetoed on October 13, 2025. The Governor vetoed SB 771 on the grounds that existing civil rights laws are sufficient to address algorithms that relay content violating California civil rights laws and that, to the extent those laws are inadequate, they should be amended.

Starting in 2027, SB 771 would have amended the Civil Code to hold large social media platforms (i.e., those that generate more than $100 million per year in gross revenue) liable if their algorithms contributed to violations of California civil rights laws. The bill would have provided that algorithmic actions could be treated as independent acts of the platform, and that platforms would be presumed to have actual knowledge of their algorithms’ operations, easing the burden of proof for plaintiffs.

Violations could have resulted in civil penalties of:

  • up to $1 million for an intentional, knowing, or willful violation;
  • up to $500,000 for a reckless violation; and
  • up to twice the aforementioned penalties, if the platform knew or should have known the claimant was a minor.

SB 771 has been returned to the California Senate, which may address these issues in a new or amended bill or attempt to override the veto.

SB 771 is available here and its progress can be tracked here.

Automated Decision Systems in Employment

SB 7: Employment; Automated Decision Systems

On September 12, 2025, the California State Legislature passed Senate Bill 7, entitled “Employment: automated decision systems” (“SB 7”), which Governor Newsom vetoed on October 13, 2025. The Governor vetoed SB 7 on the grounds that it imposed broad, unfocused notification requirements that did not directly target employer misuse of automated decision systems (“ADS”), and included overly broad restrictions that could hinder valuable employment practices (e.g., using customer ratings as a primary input). He also wanted to first assess the impact of the then-forthcoming California Privacy Protection Agency regulations, which would help employers and independent contractors better understand how their personal data is used by ADS.

SB 7 would have established new requirements for the use of ADS in employment-related decisions and set forth corresponding enforcement mechanisms. An ADS is defined as any computational process derived from machine learning, statistical modeling, data analytics, or AI that issues simplified output (such as a recommendation or score) that is then used to assist or replace human decision-making and that materially impacts natural persons. This is similar to New York City’s Local Law 144 of 2021, which also regulates automated employment decision tools by requiring a bias audit by an independent auditor and public notice prior to use.

SB 7 would have required employers to provide written notice to workers who would have been affected by an ADS used for employment‑related decisions. Employers would have had to provide notice at least 30 days before first deploying an ADS and, for new workers, within 30 days of hiring them. If an employer had already used an ADS when the law took effect, workers would have had to receive notice no later than April 1, 2026. Employers would also have had to maintain an updated list of all ADS in use. Notices would have had to be provided in plain, ordinary language (in the language used for routine communications) and via a simple, easy‑to‑use method, such as email. Separately, an employer would have had to notify a job applicant upon receiving the application that an ADS would be used in making hiring decisions for that position. This notice could have been provided via an automatic reply mechanism or in the job posting.

While employers would have been permitted to use an ADS in employment-related decisions under the proposed legislation, they would not have been allowed to collect workers’ data via an ADS for any purpose not disclosed in an ADS notice. Further, employers would not have been permitted to rely solely on an ADS to make a “discipline, termination, or deactivation decision”. If an employer had relied on ADS output for such a decision, a human reviewer would have had to review the ADS output and compile other relevant information. SB 7 provided guidance on the information to be collected, which could have included performance evaluations, personnel files, and witness interviews (for example, customer reviews). If an employer had primarily relied on an ADS to make a discipline, termination, or deactivation decision, the employer would have had to provide the affected worker (at the time of informing the worker of the decision) with a written notice in plain language and via a simple method. The notice would have had to state:

  • a human contact for more information and for requesting the worker’s data relied upon;
  • that an ADS assisted the decision made by the employer;
  • that the worker has the right to request a copy of the worker’s data used by the ADS; and
  • that the employer is prohibited from retaliating against the worker for exercising these rights.

SB 7 would also have allowed workers to request from an employer a copy of the most recent 12 months’ data used by an ADS in relation to a discipline, termination, or deactivation decision.

SB 7 has been returned to the California Senate for consideration of the Governor’s veto. The use of AI in employment-related decisions remains a hot topic in regulatory discussions.

SB 7 is available here and its progress can be tracked here.

Digital Replicas

SB 11: Artificial Intelligence Technology

On September 13, 2025, the California State Legislature passed Senate Bill 11, entitled “Artificial intelligence technology” (“SB 11”), which Governor Newsom vetoed on October 13, 2025. The Governor vetoed SB 11 on the grounds that the bill’s requirement for a hyperlink disclosure warning users of potential civil or criminal liability for unauthorized digital replicas was not sufficient to deter wrongdoers from using AI to impersonate others without their consent.

SB 11 would have introduced new requirements and definitions relating to AI and digital replicas in California. Under SB 11, a “digital replica” would have had the same meaning as set forth in Section 3344.1 of the California Civil Code, which is “a computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that is embodied in a sound recording, image, audiovisual work, or transmission in which the actual individual either did not actually perform or appear, or the actual individual did perform or appear, but the fundamental character of the performance or appearance has been materially altered.” SB 11 provided that a false impersonation included the use of a digital replica with intent to impersonate another person for fraudulent or unlawful purposes, and that the unauthorized use of a person’s name, voice, signature, photograph, or likeness, including through a digital replica, was prohibited.

SB 11 would have required that, by December 1, 2026, any provider making AI technology available to consumers for creating digital replicas display a clear warning. The warning would have had to state that unlawful use of the technology to depict another person without their consent could result in civil or criminal liability, and would have had to be hyperlinked on any page where a user could input prompts and in applicable terms and conditions. Failure to display the required warning could have resulted in penalties of up to $10,000 per day of non-compliance.

SB 11 has been returned to the California Senate for consideration of Governor Newsom’s veto. SB 11 is available here and its progress can be tracked here.

For a global overview of the legal framework for digital replicas and deepfakes, including California’s policy developments, see our previous related article here.

Concluding Thoughts

2026 will be a year of rapid technical advancement and adoption for AI globally. Governments and regulators around the world will attempt to keep up with and regulate the development, deployment, and use of AI. California, home to some of the most innovative AI companies in the world, will continue to lead these efforts and, as signaled by Governor Newsom, try to strike the right balance between regulating AI and creating a reasonable environment for AI to be developed and deployed.
