AI leadership: Practicing what we preach

There are two strands to our world-class AI group: AI advisory and our expertise in building and deploying AI-enabled solutions. Here, three partners explain how this strategy benefits the firm and our clients.

Marrying subject expertise with technical expertise

Named the world’s most innovative law firm by the Financial Times, we’re redefining what it means to be a trusted advisor in artificial intelligence. We’re not just advising on the future; we’re helping to build it.

The breadth and depth of our expertise, combined with the insights generated from building and deploying systems, put us at the forefront of the market in AI adoption and advisory capabilities.

Clients come to us for advice because we understand the latest technology, having gained first-hand and first-mover experience. In 2022, we became the first firm to implement generative AI at an enterprise level.

Through our Markets Innovation Group (MIG), a practice comprising lawyers, developers, and technologists, we continue to innovate by building generative AI solutions used internally and directly by our clients. Francesca Bennetts, a London-based partner in MIG, explains: “We have used the last three years to embed the use of generative AI in our lawyers’ day-to-day work. We’ve given our people access to the best tools and provided extensive training, while developing and sharing best practices.”

Because our lawyers use generative AI themselves, we can apply their practical experience to inform and refine our products. Our extensive research and development includes testing and developing bespoke methods, such as advanced prompt engineering techniques and workflows, to ensure our solutions are tailored to our specific needs and those of our clients.

“Where we are unique is that we have invested a significant amount of lawyer time in developing these tools. They marry our legal subject matter expertise with deep technical expertise, which creates a more evolved product.”

ContractMatrix, built in collaboration with Harvey and Microsoft, is the firm’s flagship tool. Our lawyers are using it to speed up the drafting, reviewing, and negotiating of contracts.

All of our lawyers have access to ContractMatrix, and it is also available as a product for clients to license. In July 2025 we launched Analyze, which allows users to leverage generative AI to streamline reviews and align documents with predefined playbooks or policy requirements.

Most recently, MIG developed ContractMatrix Vantage, another lawyer-led tool, which uses generative AI to conduct complex, precise legal analysis across large portfolios of documents.

“This isn’t generic generative AI. We are distilling 20–30 years’ worth of legal, market, and product knowledge from our lawyers’ brains to create solutions that deliver the expertise of the firm within a technology tool. This functionality is designed and tested by lawyers to provide a workable, proven solution for our clients.

“What’s really exciting is seeing previous sceptics become passionate advocates because they can see what we can do with the tools we have built. Now, they are motivating their associates and juniors to engage proactively with AI.”

MIG works with lawyers across the firm to identify opportunities where we can deploy generative AI most effectively. One current example relates to the Capital Requirements Directive VI (CRD VI), EU financial services rules whose key provisions apply from January 2027. Together, our Regulatory, Finance and MIG teams are building a ContractMatrix Vantage module to handle loan transfer reviews at scale.

“We’ve been proactively building a prototype because we know from experience how difficult it can be for our clients to implement regulatory change projects. We’ve spent time upfront thinking about what an AI-enabled solution could look like, building a prototype, and testing it. Now, we’re able to share the results of this investment directly with clients,” concluded Francesca.

“These tools marry our legal subject matter expertise with deep technical expertise, which creates a more evolved product.”

Navigating diverging global frameworks

A&O Shearman’s global coverage, combined with its own internal use of AI, means the firm can offer clients deep expertise.

“Where our clients struggle is in how to comply with often disparate AI regulatory regimes across the world,” says Peter van Dyck, head of Belgium digital, data, IP and technology.

This has led to a flurry of AI advisory and transactional work. For example, we are advising a client on a large multi-jurisdictional project to implement a governance and compliance program; a major life sciences player on the regulatory requirements and limitations for using AI in drug discovery worldwide; and a leading financial institution on reviewing its global AI contracting templates, contract handbooks and policies.

“For these types of multi-jurisdictional projects, we need intimate knowledge of various legal regimes, together with an in-depth understanding of the best practices in each of the relevant markets. Clients come to us because we can offer this unique combination of global and local expertise.”

A&O Shearman often leverages legal tech when advising clients. As one example, Peter highlights the firm’s self-service AI classifier tool, developed by multiple offices and led by Luxembourg. Combining technical and legal expertise, it enables clients to answer a set of questions (such as how their systems work and what purposes they serve) to find out which risk level their systems would fall into under the EU AI Act. The higher the risk level, the more stringent the rules under the Act.

Normally, he points out, establishing whether a system is prohibited, high-risk, or limited risk would take hours of manual analysis by legal teams. Using the tool, a company’s business team can complete the task without specialist knowledge, saving time for legal departments.
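To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the kind of questionnaire-driven logic such a tool can encode. This is not the firm’s classifier: the questions, fields, and mapping below are simplified assumptions loosely modeled on the EU AI Act’s risk tiers (prohibited practices, high-risk uses, limited-risk transparency obligations, and minimal risk).

```python
from dataclasses import dataclass

# Hypothetical questionnaire answers about an AI system.
# The fields and the mapping below are illustrative assumptions,
# not the firm's actual tool or a complete reading of the EU AI Act.
@dataclass
class SystemProfile:
    uses_subliminal_manipulation: bool  # Article 5-style prohibited practice
    used_for_social_scoring: bool       # Article 5-style prohibited practice
    high_risk_use_case: bool            # e.g. hiring or credit scoring (Annex III-style)
    interacts_with_humans: bool         # chatbots: transparency obligations
    generates_synthetic_content: bool   # deepfakes: transparency obligations

def classify(profile: SystemProfile) -> str:
    """Map questionnaire answers to a simplified EU AI Act risk tier."""
    if profile.uses_subliminal_manipulation or profile.used_for_social_scoring:
        return "prohibited"
    if profile.high_risk_use_case:
        return "high-risk"
    if profile.interacts_with_humans or profile.generates_synthetic_content:
        return "limited risk (transparency obligations)"
    return "minimal risk"

# Example: a hypothetical CV-screening tool used in recruitment.
cv_screener = SystemProfile(
    uses_subliminal_manipulation=False,
    used_for_social_scoring=False,
    high_risk_use_case=True,
    interacts_with_humans=False,
    generates_synthetic_content=False,
)
print(classify(cv_screener))  # "high-risk"
```

The real tool doubtless encodes far more nuance, but the shape of the logic is the point: structured answers from a business team feed a rules-based mapping, so the first-pass classification no longer requires specialist legal analysis.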

“Clients come to us because we can offer this unique combination of global and local expertise.”

The EU’s AI regulation: Has the ‘Brussels Effect’ worn off?

In recent years, the EU has seen a wave of complex and interrelated regulations such as the AI Act, Data Act, Data Governance Act, and Digital Services Act.

“We’re seeing a lot of our clients struggle with how these frameworks all fit together, and how they can comply with all of them at the same time,” notes Peter. The EU legislator hailed the AI Act, which entered into force in 2024, as the world’s first major horizontal AI regulation. There were high hopes that other major markets would soon adopt similar rules.

“So far, we have not seen much of the so-called ‘Brussels Effect’,” Peter says.

“In fact,” he continues, “we’ve seen the opposite happening, with the U.S. focusing on deregulation and the UK taking a ‘wait and see’ approach.”

This leaves clients with a disparate set of frameworks to follow, and the EU’s future AI capabilities in question.

There are signs that the EU itself may now be changing gear. Earlier this year, the bloc withdrew its proposed AI Liability Directive, a set of rules that would have governed liability for AI systems, and it is considering further simplifications of the regulatory regime. Most importantly, on November 19, 2025, the Commission published its Digital Omnibus proposal, which, alongside several other simplifications, would pause the application of the AI Act’s provisions on high-risk AI systems for up to 16 months.

“This is positive news for our clients. The EU is now taking a critical look at whether the set of regulations they’ve imposed is fit for purpose, and seems to be open to revising and simplifying some of them. This is significant and gives our clients some much-needed breathing room.”

“The EU is now taking a critical look at whether the set of regulations they’ve imposed is fit for purpose, and seems to be open to revising and simplifying some of them. This is significant and gives our clients some much-needed breathing room.”

U.S. regulation straddles patchwork of state rules and shifting federal approach

In the U.S., the federal government is not following the EU’s approach to regulating AI.

The Biden Administration issued executive orders setting guardrails for the development and deployment of AI technology; the Trump Administration revoked these and issued its own executive orders focused on the domestic expansion of AI.

Alex Touma, a San Francisco-based partner, notes that these new executive orders focus on lowering barriers to innovation and positioning the U.S. as a global leader in AI technology.

In November 2025, President Trump called for federal oversight of AI to counteract what he described as “over-regulation by the States” that risked hampering growth and embedding “DEI ideology.”

While some Republican lawmakers would back a single federal standard, others oppose it, including Florida Governor Ron DeSantis, who warned that such a move could act as a “subsidy to Big Tech” and “prevent states from protecting against online censorship of political speech, predatory applications that target children, violations of intellectual property rights and data center intrusions on power/water resources.”

For now, AI regulations in the U.S. consist of a patchwork of state laws. These include new regulations directly targeting AI technology, as well as existing state laws in areas such as data privacy that are also applicable to AI.

In September 2025, California passed SB 53, which regulates frontier AI model developers by requiring transparency, safety measures, and mitigation steps in the event of catastrophic risks associated with AI tools. The framework also provides guidance on penalties for violations and whistleblower protections. California Governor Gavin Newsom had vetoed a stricter version of the regulation in 2024, and the state eventually passed this more pared-down version following consultation with technology companies and policymakers.

Earlier, in 2024, Colorado passed the first comprehensive AI law, targeting high-risk systems that make consequential decisions (such as those regarding education, financial services, housing, and legal services). Other states have enacted a variety of measures, including rules preventing discriminatory uses of AI and imposing transparency requirements.

“This patchwork of regulations may make it harder for smaller and less sophisticated companies to comply. But this was already the case for other legal areas such as data privacy and biometrics.”

Complex advice across practices, regulatory regimes, and jurisdictions

The firm, Alex continues, has clients of varying sizes seeking advice on the legal risks arising from AI and on compliance across all areas of the law.

“We assist clients with developing and deploying AI products in a manner that complies with regulations, such as obtaining and using data to train generative AI, end-user disclosures, and risk mitigation policies and measures.”

On the commercial side, the firm assists clients by drafting and negotiating agreements related to the development and use of generative AI services. This includes advising big Silicon Valley players on the best approaches to selling their AI services and forming business partnerships.

Other clients need help implementing governance frameworks that set boundaries on how employees can use AI across functions including engineering, HR, communications, legal, and the wider business, all of which have different risk profiles.

For example, one client wanted to develop an AI tool for use by its in-house employment team. The firm counseled them on how New York City’s Local Law 144 would apply to automated decisions made by the tool, Alex says.

We are also helping a large financial institution develop AI agents, which will provide financial advice and services.

The work will involve input from a cross-practice and global group of firm attorneys, including those specializing in technology transactions, financial services, data privacy, and cyber security.

The objective is to ensure that stakeholders understand the risks of deploying this technology globally and to help the client implement safeguards around its development and deployment. This includes ensuring that the AI does not produce output that goes beyond the type of information permitted under applicable law.

Alex explained that because the client operates in a heavily regulated industry, much of the focus is on regulatory compliance and educating customers on the risks inherent to using AI technology in connection with financial services.

“These are sophisticated companies deploying very complex products across many jurisdictions with varying regulatory regimes, which requires the input of many practice groups. Only large global firms like A&O Shearman can advise on this level of complexity,” says Alex.

Because there isn’t always a precedent for processes like architecting systems, setting policies, or dictating terms, lawyers work alongside clients to brainstorm and establish best practices.

Alex has also worked on M&A transactions, assisting clients acquiring AI technology, with due diligence covering areas such as the target’s development practices, including the source of the data used to train its AI, and the policies and contracts governing it.

Another typical matter is advising large technology companies that develop the technology on their practices for sourcing data (directly and through partners) to train AI tools, an area rife with litigation. This includes analyzing the laws of each applicable jurisdiction and determining whether those data practices raise any issues.

Given the fast pace of regulatory change, the firm keeps clients updated through its tech blog, email alert service, and direct contact. However, Alex observes that despite the significant volume of litigation around AI technology (including IP infringement risks arising from the use of training data), sector companies appear to have a higher tolerance for risk given the race to develop and deploy the technology.

In a rapidly evolving regulatory landscape, A&O Shearman’s dual approach—combining hands-on AI innovation with deep legal expertise—positions the firm to guide clients through complexity with confidence.

“These are sophisticated companies deploying very complex products across many jurisdictions with varying regulatory regimes, which requires the input of many practice groups. Only large global firms like A&O Shearman can advise on this level of complexity.”