Opinion
Zooming in on AI - #5: AI under financial regulations in the U.S., EU and U.K. – a comparative assessment of the current state of play: part 1
Rapid and accelerating developments in artificial intelligence have prompted governments around the world to consider how AI should be regulated and used responsibly by businesses, without stifling innovation.
This is particularly the case in the financial sector, where AI has the potential to bring operational efficiencies and even improved investment performance, but also brings with it risks due to the inherent unknowns that come with new technology. AI (as now defined) has in reality been widely adopted in the financial sector for many years, including for text transcription, chatbots and helpdesks, and data analytics. However, there are many potentially novel applications where AI has the ability to replace roles traditionally performed by humans.
Governments and regulators are concerned with mitigating risks associated with AI—for example, ensuring that the use of AI by businesses is safe and transparent with proper systems and controls. However, approaches in different countries have differed drastically. While the EU has the most comprehensive AI-specific legislative measure in the AI Act with detailed regulatory requirements in particular for high-risk AI systems, the U.S. and U.K. have thus far adopted more of a common law approach of addressing risks as they arise or become apparent, predominantly using tools under existing technology-neutral legislation to issue policy pronouncements, supervise firms using AI and manage any issues.
This is the first in a series of three publications, in which we will compare the current approaches for regulating AI in the financial sector in the United States, the European Union and the United Kingdom. In this note, we consider at a high level the differing approaches to and principles of regulation. The second part will look at scope, extraterritoriality, data and third-party service providers. The final part will consider differing approaches to enforcement and remedies as well as liability.
Firms should carefully monitor developments across these and other relevant countries and consider for their business the recommendations in the Action Plan at the conclusion of this note.
Current approaches to regulating the use of AI
This table summarises at a high level the current approaches to regulating the use of AI in the U.S., the EU and the U.K., covering the topics that will be addressed across this series of notes.
| | U.S. | EU | U.K. |
|---|---|---|---|
| Specific AI legislation, regulation or policy | | | |
| Approach | | | |
| Key Principles | | | |
| Scope | | | |
| Data governance / processing | | | |
| Extraterritoriality | | | |
| Third-party providers | | | |
| Fines / enforcement | | | |
| Remedies | | | |
| Liability | | | |
*Issued by or under the U.K.’s previous government. The principles noted above derive from the AI White Paper, also issued under the U.K.’s previous government.
General Approach & Principles
There is a rapidly changing ecosystem of laws and regulations applicable to the development and use of AI. In some jurisdictions there is broad AI-specific legislation, such as the EU’s new AI Act, which applies alongside a wide range of existing laws on intellectual property, data protection and privacy, financial services, antitrust, cybersecurity, consumer protection and other matters. However, in most countries there is no AI-specific legislation and AI-related matters are governed only by these existing laws. Countries around the world are currently considering whether any of these existing laws require changes to address the novel questions and challenges raised by AI. Many jurisdictions have developed general principles for regulating AI, and these enshrine similar rights such as transparency, fairness and human oversight.
Several jurisdictions have also signed the AI Convention, namely Andorra, Georgia, Iceland, Israel, Norway, the Republic of Moldova, San Marino, the United Kingdom and the United States of America, as well as the European Commission. The AI Convention, which is the first legally binding international agreement on AI, will enter into force once there are five ratifications. It sets fundamental principles for activities within the lifecycle of AI systems, prescribes remedies, procedural rights and safeguards, and requires risk and impact management. Many of the principles align with those in the EU’s AI Act, such as transparency, oversight, accountability, data privacy, reliability and safe innovation. The AI Convention applies to both public authorities and private actors. In its application to the private sector, parties to the AI Convention may opt for it to apply directly or implement their own measures.
U.S.
In the United States, there are currently no comprehensive AI-specific laws at the federal level, though more limited laws have been passed in this space, including laws to coordinate the U.S. government’s use of AI and state AI laws. As a result, and consistent with the financial regulatory approach in the U.S., there are efforts at the federal and state levels both to apply and enforce existing laws and regulations in the AI context and to develop new rules where there are gaps in the existing regulatory landscape. U.S. agencies have already begun enforcement efforts, including in relation to so-called “AI washing” and AI disgorgement for improperly collected data, and have indicated an increased enforcement focus on AI and other emerging technologies. We discuss enforcement and fines in the third part of this series.
The federal and state governments are focused on mitigating risks arising from both the public and private sector’s use of AI, including those related to privacy, fair use and ensuring appropriate disclosures to the public. In addition, reflective of geopolitical pressures, a core emphasis is national security and ensuring that AI technology is not weaponized against the U.S. in either a military or commercial sense: both through restrictions on outbound investment and exports of sensitive technologies, including AI, and through the review of inbound investment by the Committee on Foreign Investment in the United States (CFIUS). We discuss the latest developments in our August note, “Sanctions and export controls expand further.”
A plethora of bills have been introduced in various Congressional committees with bipartisan support, and the House and Senate have set up bipartisan task forces and working groups and held congressional hearings to better understand AI policy priorities and further coordinate legislative efforts.
Most recently, on May 15, 2024, a bipartisan Senate working group issued a report entitled Roadmap for Artificial Intelligence Policy (the “Roadmap”), which addressed eight key policy areas: (1) supporting U.S. innovation in AI; (2) AI and the workforce; (3) high impact uses of AI; (4) elections and democracy; (5) privacy and liability; (6) transparency and explainability; (7) intellectual property and copyright; and (8) safeguarding against AI risks.
Notably for the financial services sector, the Roadmap calls for the creation of a comprehensive federal data privacy framework related to AI that can be applied across multiple sectors. The Roadmap specifies that this data privacy framework should include provisions addressing data minimization, data security, consumer data rights, consent and disclosure, and data brokers. The Roadmap encourages relevant Senate committees to develop legislation that ensures financial service providers are using accurate and representative data in their AI models. The Roadmap also supports a regulatory gap analysis in the financial sector—which was also proposed by the bipartisan Artificial Intelligence Advancement Act introduced in the Senate in October 2023.
In the absence of comprehensive Congressional action on AI, the Biden Administration has sought to take the lead by issuing a broad-ranging Executive Order on AI in October 2023—Executive Order 14110, entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Executive Order 14110 directs over 50 federal entities to engage in more than 100 specific actions to implement the guidance set forth across eight overarching policy areas: (1) safety and security; (2) innovation and competition; (3) worker support; (4) AI bias and civil rights; (5) consumer protection; (6) privacy; (7) federal government’s usage of AI; and (8) international leadership. Executive Order 14110 highlights areas of focus for the enforcement of existing regulations and directs agencies to conduct studies, publish reports and develop guidance around AI. See A&O Shearman on Tech, “Biden Administration Issues Broad Executive Order to Regulate and Advance Artificial Intelligence.”
Under the direction of Executive Order 14110, the U.S. Department of the Treasury issued a public report on best practices for financial institutions to manage AI-specific cybersecurity risks (the “Treasury Report”). The Treasury Report is a digest of AI use cases, threat and risk trends, governance and cybersecurity best practice recommendations, and challenges and opportunities for financial institutions, incorporating 42 in-depth interviews with various industry stakeholders. The Treasury Report outlines the current regulatory landscape applicable to the use of AI in cybersecurity and fraud management by financial services firms. These regulatory expectations, in turn, closely track best practices shared by participating financial institutions for mitigating AI-related cyber and fraud risks. These best practices include incorporating AI risk management within existing enterprise risk management programs; mapping data supply chains; proper due diligence of vendors; maintaining high levels of cybersecurity, especially around data; and having the right risk tolerance for both the specific use case and the overall risk appetite of the firm.
Other U.S. federal agencies have also begun to interpret and provide guidance on how existing laws and regulations apply to AI and to consider new rules for AI within their respective jurisdictions. Such agency efforts to address AI may be forestalled in light of the U.S. Supreme Court’s June 2024 decision in Loper Bright Enterprises v. Raimondo, which overturned Chevron deference, a long-standing doctrine that instructed courts to defer to administrative agencies’ reasonable interpretations of ambiguous statutes. It is likely that any agency rulemaking on AI will be closely scrutinized both by the public and by courts. These agency efforts include:
- Securities and Exchange Commission (SEC)
Gary Gensler, the Chair of the SEC, has made public statements concerning the use and potential risks of AI technologies in the securities industry, specifically identifying four key areas of concern: (i) the potential for conflicts of interest; (ii) the potential for fraud and deception; (iii) the impact on privacy and intellectual property issues; and (iv) the impact on financial stability. These remarks were soon followed by the SEC’s proposed rulemaking on “predictive data analytics,” which would, among other things, require broker-dealers and investment advisers to eliminate or neutralize the effect of certain conflicts of interest associated with their use of AI and other technologies. The proposal seems unlikely to be finalized in the near future, as the SEC has announced that it is likely to re-propose the rule.
The SEC has also proposed rules addressing outsourcing of certain covered functions by investment advisers and cybersecurity risk management rules for investment advisers and broker-dealers. For example, in May 2024, the SEC finalized amendments to Regulation S-P, which governs how certain financial institutions treat consumers’ non-public personal information. The amendments were intended to help protect investors’ privacy from the “expanded use of technology and corresponding risks.”
In the absence of final AI-specific rules, the SEC’s efforts indicate that it is considering using existing regulatory provisions to address risks the SEC perceives with respect to AI. Investment advisers and broker-dealers are required to implement policies and procedures designed to prevent violations of the federal securities laws. Furthermore, under SEC rules such as Regulation S-P and Regulation S-ID, broker-dealers, investment advisers and investment companies must take certain steps to safeguard customer information and appropriately respond to red flags related to possible identity theft.
- Commodity Futures Trading Commission (CFTC)
The CFTC issued a request for public comment on a wide range of AI-related questions in January 2024. In May 2024, the CFTC’s Technical Advisory Committee recommended that the CFTC develop an AI Risk Management Framework governing the use of AI in financial markets. In developing the framework, the committee said the CFTC should hold public roundtables, conduct a “gap analysis” of existing regulations, and generally aim to align with other financial regulators and the National Institute of Standards and Technology. The committee highlighted use cases and related risks for AI concerning trading and investment; customer communications, advice and service; risk management; regulatory compliance; and back office and operations.
In accompanying statements, Commissioners Kristin Johnson and Caroline Pham emphasized identifying existing CFTC regulations that may address AI-related risks, for example by looking at the existing approach to risks and controls in algorithmic trading. The CFTC has not proposed any AI-specific rulemaking.
In June 2023, the CFTC formed the new Cybersecurity and Emerging Technologies Task Force within the CFTC Division of Enforcement, which will address “cybersecurity issues and other concerns related to emerging technologies (including artificial intelligence).”
- Banking Regulators
On June 6, 2023, the Federal Reserve, Federal Deposit Insurance Corporation and Office of the Comptroller of the Currency released final Interagency Guidance on banking organizations’ management of risks associated with third-party relationships which, while not specific to AI, is highly relevant, and is discussed further in part two of this series. The federal banking regulators have not otherwise released any guidance or rulemaking specific to AI. However, general principles of safety and soundness apply to any use of AI.
- Consumer Financial Protection Bureau (CFPB)
The CFPB has produced guidance, reports, and proposed rules relating to the use of AI in certain contexts, mostly in relation to consumer credit. For example, it has issued guidance noting that creditors that use AI or complex algorithms in aspects of their credit decisioning must still provide a notice to consumers that discloses the specific reasons for taking adverse action, and that creditors must be able to explain the specific reasons for their credit decisions, including when using AI. The CFPB has also published a report highlighting the potential issues and consumer harm arising from the use of AI chatbots.
- Federal Trade Commission (FTC)
Where companies do not collect personal data in accordance with the law, and they use illegally-collected personal data to train AI, the FTC has in some cases required not just the deletion of the ill-gotten data but also the destruction of the AI that was trained using this data. This penalty has been imposed in six cases to date. The FTC has not issued guidance regarding when it may impose this disgorgement remedy.
Additionally, numerous individual states have passed or are considering stand-alone AI laws as well as comprehensive privacy laws which apply to automated processing via AI. According to the National Conference of State Legislatures, at least 45 states and Washington D.C. introduced AI bills this year, and over 30 states have adopted resolutions or enacted legislation pertaining to AI. State stand-alone AI laws (such as in Colorado and Utah) include regulation of generative AI decisioning (i.e., decision-making without meaningful human oversight) in critical areas such as the provision of health care services, insurance, education admissions, employment decisions, and loans and other financial services. Notably, California is considering an AI regulation bill, which, if signed into law, would require powerful AI models to undergo safety testing prior to being released to the public and would authorize the state’s attorney general to hold developers liable for serious harms caused by their AI models. We discuss California’s draft Safe and Secure Innovation for Frontier Artificial Intelligence Models Act in “Zooming in on AI – #3: California SB 1047 – The potential new frontier of more stringent AI regulation?”.
Furthermore, many states now have some type of data protection law or privacy law. Comprehensive state privacy laws regulate automated processing and require notice and, in certain cases, consent. If sensitive information is processed by AI or if sensitive information is used to train AI, some states require data privacy impact assessments prior to commencing use of the AI tool.
Given the current ad hoc approach of addressing AI risks as they arise, the evolving landscape may make compliance for firms with U.S. operations or a U.S. nexus particularly challenging, and firms should closely monitor developments in this area and take into consideration the recommendations in the Action Plan laid out below.
EU
The EU has been the first to develop AI-specific legislation, with the AI Act setting legal requirements specifically for AI systems, focusing on high-risk AI systems. The AI Act is the most comprehensive attempt at regulating the technology undertaken by any legislature globally. The AI Act defines four main players in the AI sector—deployers, providers, importers and distributors. A single entity in this sector might fall within several of these categories. The AI Act also defines different types of AI systems according to the level of risk involved in the use of those systems. How practical this approach is remains to be seen. We discuss the different obligations applying to providers and deployers in “Zooming in on AI – #4: What is the interplay between “Deployers” and “Providers” in the EU AI Act?”.
The EU AI Act entered into force on 1 August 2024 and will for the most part apply directly across the EU from 2 August 2026. Certain provisions apply on different timelines: for example, the prohibition on certain “unacceptable” AI systems applies from 2 February 2025, general-purpose AI (GPAI) models must comply from 2 August 2025, and the provisions on certain high-risk systems will apply from 2 August 2027. We set out more details on when various aspects of the AI Act will apply in, “Zooming in on AI: When will the AI Act apply?”
In the meantime, the European Commission has launched the AI Pact, which encourages industry to voluntarily start implementing the requirements of the AI Act before they are legally applicable. The Commission has conducted a targeted consultation on the use of AI in the financial services sector.
The approach of the AI Act to mitigating AI risks is discussed in, “Seizing the AI opportunity in Europe” and “EU AI Act: Key changes in the recently leaked text.”
U.K.
The U.K. has not yet adopted any AI-specific legislation. However, that may change under the new Labour government whose manifesto committed to introducing binding requirements on developers of the most powerful AI models (equivalent to what the EU AI Act defines as highly capable GPAI). This was reiterated in the post-election King’s Speech, which sets the legislative agenda for the next 12 months. In the meantime, the U.K. continues to rely on existing laws, which are generally technology-neutral, and regulatory pronouncements or guidance in some sectors. Matters are largely left to sector-based regulators, who must interpret and apply to their sectors the government’s AI principles. Regulators are encouraged to be transparent about the actions that they are taking.
The previous government’s strategy, set out in “A pro-innovation approach to AI regulation,” was presented as a “context-based” approach that focused on where and how AI is used. The approach is founded on common law principles of only imposing legal and regulatory obligations where necessary to address identifiable risks. It was also based on the five principles (set out in the above summary table), with a preferred approach of initially not putting those into statute. Certain regulators were requested to update their strategic approach to AI, including the financial services regulators. Regulators are also encouraged to develop their policy approach as needed, to issue guidelines and to use technical standards to assist AI developers and deployers to implement the principles. There is no indication that policy will change on these matters with the change of government. The previous government had also established an AI Safety Institute to carry out research on AI safety and develop and conduct evaluations on advanced AI systems. The House of Lords has indicated that it wishes the AI Safety Institute to be put on a statutory footing, although no bill has been proposed for this.
The financial services regulators’ approach to regulating AI used or intended to be used in the financial services sector is technology-agnostic, principles-based and outcomes-focused. Before the change in government, the U.K. financial services regulators described how the previous government’s AI principles fit with their rules, high-level principles and expectations, and how those apply to regulated firms using AI. These include:
- Safety, security and robustness
The Financial Conduct Authority’s (FCA’s) Principles for Businesses apply. For example, firms must conduct their business with due skill, care and diligence (Principle 2) and take reasonable care to organise and control their affairs responsibly and effectively, with adequate risk management systems (Principle 3). Some of the Threshold Conditions apply – these are the minimum conditions a licensed firm must satisfy to obtain and maintain its licensed status. For example, a firm’s business model must be suitable, compatible with the firm’s affairs being conducted in a sound and prudent manner, and take into account the interests of consumers and the integrity of the U.K. financial system. In the area of operational resilience, firms must be able to respond to, recover from, learn from and prevent future operational disruptions.
- Appropriate transparency and explainability
High-level requirements and principles relating to the information firms must provide to consumers apply, including the Consumer Duty for retail business, and for wholesale business, the principle requiring firms to communicate information in a way that is clear, fair and not misleading (Principle 7).
- Fairness, which includes data protection
Various Principles apply, such as the Consumer Duty under which firms providing retail services or products, must act to deliver good outcomes for retail customers, and ensure this is reflected in their strategies, governance and leadership. For wholesale business, treating customers fairly (Principle 6) applies. For all firms, the principles of managing conflicts of interest (Principle 8) and respecting the customer relationship of trust (Principle 9) apply.
- Accountability and governance
The FCA’s Principles apply, in particular on management and control. The requirements for firms to have senior management arrangements, systems and controls as well as the Senior Manager and Certification Regime apply.
- Contestability and redress
For example, firms are required to have complaints handling procedures and policies.
The FCA notes that a more proactive approach to supervision is warranted where a firm uses AI systems. It has said that it would adapt by placing a strong focus on testing, validation and explainability of AI models, vigorous accountability principles, and openness and transparency. The regulators are monitoring the situation, including wider technology trends such as quantum computing, and future adaptations have not been ruled out.
The Bank of England’s Financial Policy Committee is engaged in considering how AI innovations may impact financial stability. The risks here include magnifying herding or broader procyclical behaviours, increasing cybersecurity risk and intensifying interconnectedness.
The U.K. Information Commissioner’s Office (ICO) last year updated its guidance on AI and Data Protection to provide greater clarity on fairness requirements.
Action Plan
A significant concern for companies adopting AI systems is how to guard against unwanted outcomes, since AI has the potential to operate unexpectedly in future or unknown factual situations. Linked to that is the question of where responsibility lies for AI and the actions required of registered individuals in senior management positions and of legal entities. Companies can take steps to promote appropriate use of AI, including the following measures, which are broadly consistent with regulatory guidance and the EU’s AI Act. For companies using or intending to use AI systems in the EU market, an early review is recommended to ensure compliance with the relevant requirements of the AI Act.
- Prepare and socialise internally an AI policy that embeds the core principles of fairness and transparency alongside concepts of human oversight, explainability, security and safety. An AI policy should be modular, with multiple layers, to ensure accessibility and usefulness across a wide range of audiences.
- Develop an AI framework and governance structure that clearly sets out roles and responsibilities across the lifecycle of each AI system. If appropriate, establish a separate AI policy and compliance team. The framework should be orientated around specific use cases for AI systems and be interdisciplinary, bringing together business, compliance, operations, information security, IT and in-house legal teams in a single forum. U.K. financial services firms are required to have strong governance oversight, with the board promoting robust risk management and clear organisational structures that demonstrate transparent and consistent lines of responsibility.
- Undertake a mapping exercise of the AI systems in use and intended to be used (i.e., an AI inventory), and where each will be used and/or placed, including third-party systems. Assess the potential risks involved for each AI system, including the level of risk, and map those against the relevant regulatory and legal requirements. This will include determining the firm’s role, taking into account each jurisdiction’s definitions and requirements. Document how each risk from an AI system will be controlled and mitigated (see the illustrative sketch following this list). U.K.-incorporated banks, building societies and PRA-regulated investment firms approved to use an internal model for calculating their regulatory capital requirements must satisfy the Prudential Regulation Authority’s (PRA’s) Model Risk Management (MRM) Principles. The PRA is clear that the MRM Principles, which came into effect in May 2024, apply to AI models, including the requirement to maintain a comprehensive model inventory. All U.S. companies, including those operating in the financial services sector, should consider enhancements to their compliance programs to address the risks associated with AI.
- Adopt and implement measures to manage the risks. It may be helpful to do so thematically in the following three risk management pillars.
Use Case
Clearly define the use case because it will drive the risks. For example, a system that is involved in pricing brings additional risks relating to price collusion that will not be relevant in other use cases, whereas a customer-facing chatbot that directly interacts with customers about financial products raises privacy and ethical issues that will not be relevant to, say, using AI to generate software. Assess whether using AI will result in a better outcome than the existing solution, taking into account relevant factors such as efficiency, cost, accuracy and security.
Operational
Implement operational steps to align and integrate AI into the business. This includes security measures (e.g., bring-your-own-key (BYOK) encryption), configuration of the model and user profiles, and privacy-enhancing technologies. The interdependencies between legal, operational and security stakeholders are greater than in non-AI based technology deployments.
Contractual
Contract terms help to mitigate legal risk, both in the contract between the organisation deploying an AI model and the model provider, and in contracts between an organisation and its customers. In negotiations with foundation model providers, there are likely to be red lines for each organisation, such as risk allocation or ensuring that customer data/trade secrets are not exposed or used to train the models for others.
- This note generally assumes that you are deploying AI in your business, but not developing models. Where businesses develop models (e.g., training or customising them via fine-tuning or retrieval augmented generation (RAG)), the risks change and, in most cases, increase.
- Assess how AI is used by third-party service providers, and the impact of its use on the recipient’s business and clients. Consider the impact of any specific legal or regulatory requirements.
- Conduct an audit of all existing commercial agreements to identify those requiring updates to address AI-specific risks. These will include, as a minimum, all service agreements and technology access agreements. Update and revise these agreements as necessary to include AI-specific protections relating to privacy, data usage rights, IP infringement, IP ownership, liability and indemnity clauses, compliance with laws and the recipient's AI policies. Ensure that the agreements also provide for any access required by regulators.
- Establish a monitoring and review process to ensure ongoing risk mitigation and compliance.
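To make the AI inventory and risk-mapping step more concrete, the sketch below shows one way a firm might record an entry in an AI inventory and apply a simple escalation rule for governance review. It is purely illustrative: the field names, risk tiers, escalation logic and the example vendor are assumptions for the purposes of illustration, not a template prescribed by any regulator or by the EU AI Act.

```python
# Illustrative sketch only. All field names, enums and escalation rules are
# assumptions for illustration; they are not prescribed by any regulator or
# by the EU AI Act.
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AISystemRecord:
    name: str                          # internal identifier for the AI system
    use_case: str                      # e.g. "customer-facing chatbot", "credit scoring"
    role: str                          # e.g. "deployer" or "provider" under the EU AI Act
    jurisdictions: list[str]           # markets where the system is used or placed
    third_party_provider: str | None   # vendor, where the system is externally sourced
    risk_tier: RiskTier
    applicable_requirements: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)


def needs_priority_review(record: AISystemRecord) -> bool:
    """Flag records that warrant escalation to the AI governance forum.

    Illustrative rule: escalate anything high-risk or prohibited, and any
    non-minimal-risk system used or placed in the EU.
    """
    if record.risk_tier in (RiskTier.HIGH, RiskTier.PROHIBITED):
        return True
    return "EU" in record.jurisdictions and record.risk_tier is not RiskTier.MINIMAL


# Example inventory entry (the vendor name is hypothetical).
chatbot = AISystemRecord(
    name="retail-support-chatbot",
    use_case="customer-facing chatbot for retail banking queries",
    role="deployer",
    jurisdictions=["UK", "EU"],
    third_party_provider="ExampleVendor Ltd",
    risk_tier=RiskTier.LIMITED,
    applicable_requirements=["EU AI Act transparency obligations", "FCA Consumer Duty"],
    mitigations=["human escalation path", "output logging and periodic review"],
)
print(needs_priority_review(chatbot))  # True under this illustrative rule
```

In practice, the fields and escalation criteria would be driven by the firm’s own risk framework and the regulatory requirements identified in the mapping exercise above.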
For additional information, read, “Desire to harness potential of generative AI drives rising interest in data as an asset” in which Allen & Overy, now A&O Shearman, discuss steps for mitigating the risks involved in generative AI.
Elevate your AI strategy | Network with peers from global businesses
Are you ready to take your AI strategy to the next level? Join us for Phase 3 of the AI Working Group, where we will explore the latest developments and challenges in AI regulation, cyber security, and M&A. Whether you are in IP, data, cyber, tech, or life sciences, you will benefit from our market-leading, cross-practice, and multi-jurisdictional AI advisory practice.
Phase 3 starts in October 2024, including topics such as:
- Cyber Security and AI: How to navigate the heightened cyber security risk landscape in the context of AI, from incident prevention to response.
- AI Act Compliance: How to meet the specific compliance obligations under the EU AI Act in different scenarios, such as deploying, providing, or developing high-risk AI systems or general-purpose AI models.
- AI in M&A: How to conduct legal and strategic risk assessments for transactions driven by AI technology acquisitions.
If you would like more details about joining the AI Working Group, email AIWorkingGroup@AOShearman.com.
Don't miss this opportunity to learn from our AI experts and network with your peers from the largest global businesses across various sectors.