Article

Artificial intelligence: Risks and value-drivers in M&A

Artificial intelligence: Risks and value-drivers in M&A
Published Date
Apr 30 2026

Developing AI capabilities is a strategic focus for every company, with acquisitions one of the primary ways businesses can rapidly move up the technology curve. Here we explore the key issues that boards need to focus on to lock in value and manage risk in AI-related deals. This memo forms part of a series examining critical legal and regulatory decision points, opportunities, and risks facing leaders in an increasingly uncertain global business environment. 

In brief

We are at an inflexion point in the AI transformation. True transformation (as with any platform shift) takes time, and businesses are moving from AI pilots to deeper AI-driven changes to working processes and systems.

M&A is a key feature of this transformation. Many board-level AI strategies include targeted M&A as a means to access key AI assets (people, data, technologies, IP). On the equity investment side, AI assets continue to be heavily targeted by venture capital and private equity firms.

However, AI investment opportunities are increasingly hard to value. Technologies are advancing by the week; geopolitical shifts are driving more cross-border fragmentation and creating supply chain risks; the wider risk landscape around AI is vast and increasingly challenging to govern as AI agents proliferate and AI tools become democratized; and “sovereign AI” has become a board-level business imperative.

AI diligence must be technology-led, grounded in a sophisticated understanding of the specific technologies, the target’s competitive moat, and real-world drivers of risk, regulation and liability exposure. It should address IP, data, regulatory, contract, export controls/trade, cyber and antitrust risks.

Critically, specialist legal “pre-diligence” of AI assets is required to evaluate AI assets early—before term sheets are signed or initial internal approvals are obtained. This ensures that the real technology, regulatory and commercial risks can be identified before significant resources are committed.

What is unique about AI in M&A?

  • AI has become a core strategic driver for M&A deals as businesses race to scale their capabilities. AI deals are competitive and fast-paced, requiring boards to focus on key risks and value drivers while moving at speed.
  • Even where the target is not an AI company, AI issues will arise in almost every M&A deal as targets are likely to be developing and/or deploying AI.
  • The real risk and value equation around AI can only be solved by applying a forward-looking approach to diligence based on the acquirer’s future AI strategy as well as anticipated technological, commercial, and regulatory developments. The rapid advance of AI (see the recent proliferation of AI agents and democratization of the platforms on which to build them) and the policy and commercial landscape around AI means that investment decisions must be future-proofed against this constant change.
  • Geopolitical issues, regulatory divergence, and concentration among a few providers raise questions as to a business’s AI sovereignty, i.e., its ability to preserve strategic control and choice across its AI ecosystem (an issue we explore in more detail here).

Why is it important to understand the target's place in the AI value chain?

  • The AI value chain is increasingly complex. It spans a core chain of contractual and commercial relationships between providers of compute and other AI infrastructure (and of energy to power that infrastructure); network connectivity; platforms and aggregators; applications and end users. Around each of these roles is a wider ecosystem of financings, partnerships, suppliers, service providers and content owners. (You can read more about the AI value chain here). Understanding the target’s positions in the value chain is a prerequisite for assessing its competitive moat as well as upstream/downstream dependencies and implications for AI sovereignty and regulatory compliance.

How should boards approach due diligence?

  • A technology-led approach to due diligence (DD) is critical to evaluate an asset’s future value, scalability, and adaptability. Boards must understand whether the AI capability being targeted is real or overstated, the precise methods used to develop it, whether the target’s data and IP assets are defensible, if there are steps in place to manage key person risks (particularly where access to people is the real driver for the deal), regulatory and contractual exposure in relation to future business models, and the real-world opportunities and challenges around scale and future use cases.
  • This can only be achieved by bringing together experts in diverse legal areas and AI technologies and conducting a modular, tiered DD process which elicits information about the target’s AI systems and AI use cases in stages. This avoids an unnecessarily complex and costly DD phase.
  • Diligence should also be jurisdictionally sensitive, (e.g., to IP and data privacy risks, as well as regulatory exposure).

What should boards look for in the technology?

  • There are many features of AI technology that will affect legal risk, including in ways that impact valuations or even whether or not to proceed with a deal.
  • Legal risk assessment starts by identifying the model and system architecture, including third-party dependencies. Those dependencies may be datasets, code, or, in the case of AI applications, AI models that are integrated into the application. Linked to that, the specific techniques used to train or develop an AI system also drive the legal risk and competitive moat around the system.
  • Another key focus area relates to the guardrails and controls that are built into the system to detect and mitigate failures and associated legal risks. Businesses are grappling with how to deploy AI agents at scale while also managing risk. Traditional risk management techniques (based on “human in the loop”) do not work for AI agents; instead, the frontline risk mitigant is to build guardrails into the system along with real-time failure detection. This is a system question, rather than one of governance.

How should fragmented regulatory regimes be considered on deals?

  • AI-specific laws (e.g. the EU AI Act) often take the headlines, but the regulatory landscape that surrounds AI is far broader. It encompasses a host of technology-agnostic laws which are rapidly being tested and updated to meet AI-specific challenges (e.g. IP, cybersecurity, privacy, antitrust, foreign investment, trade, consumer protection and more).
  • New AI laws are being passed constantly, with each government’s legislative agenda being driven by their specific geopolitical objectives.
  • Legal diligence is never a proxy for a compliance audit. Instead, it is a question of assessing the target’s compliance posture and maturity, and likely indications of deficiencies that may pose future risk. A mature compliance strategy: (i) focuses on the target’s key markets and AI systems; (ii) embeds core principles (e.g., transparency, security and accountability); and (iii) bridges their liability exposure at law via risk allocation in contracts.

What are the main IP issues?

IP infringement

  • IP infringement risk arises at almost every stage of AI development and deployment. IP rights are national rights, and so need to be considered in each relevant jurisdiction.
  • Much of the policy and legal risk in relation to IP is focused on copyright, which is the main form of IP that subsists in the content and data used to train and deploy AI.
  • There is still a huge amount of uncertainty in this space, with more than 100 significant copyright infringement lawsuits underway globally. Each one will turn on its facts, and it will take years for broadly applicable positions to emerge. In the meantime, governments are passing and consulting on significant changes to copyright laws, designed to be variously AI-developer- or creative-industry-friendly, depending on the country.
  • A deployer may risk infringing IP rights by using outputs from a model that has been trained by a third party using unauthorized copyrighted works, given that many types of AI system can reproduce training data as outputs. This risk is jurisdiction-specific. Appropriate disclosures and contractual assurances should be secured from the vendor. There is also significant policy focus on transparency around training datasets used by AI developers, with relevant legal requirements in both the EU and U.S.
  • A business using its own data to develop or deploy AI must assess how it was sourced and/or licensed and any potential infringement risk.

Ownership

  • Questions around IP ownership relate to both AI systems and AI-generated outputs.
  • For AI developers, DD should focus on the target’s strategy to protect its AI innovations. An AI system comprises many different components (e.g., algorithms, data, functionalities, user interfaces and source code) and there is no single form of IP protection. Often, there is a strategic choice between trade secrets and patents, and that choice is not straightforward.
  • AI-generated outputs will not be copyrightable in almost all jurisdictions. Copyright would only arise if the outputs have been modified sufficiently by a human author or if an argument can be made successfully that the elaborateness of the prompting of the AI model confers on the output sufficient human creativity. This latter argument has been successful in China, albeit roundly rejected in the U.S.
  • Diligence on IP matters focuses on the steps taken by the target to reflect these issues in its IP protection and contracting strategies.

What are the main data privacy issues?

  • Many of the steps involved in building and deploying AI involve processing personal data in ways that are incompatible with the fundamental principles of privacy laws globally, (e.g., accuracy, transparency, lawfulness, data minimization, purpose limitation) and data subject rights (e.g., the right to be forgotten). These tensions will arise across the AI lifecycle depending on the personal data processed at each stage (e.g., training, inputs, outputs). Indeed, in many ways, existing privacy laws in the EU are more likely to be problematic than the EU AI Act, despite the latter gaining more media attention.
  • Regulatory clarity is a long way off, with legislators in the EU proposing targeted changes to the General Data Protection Regulation (GDPR) to address some areas of tension.
  • Deployer targets will look to their developer vendors’ compliance with privacy laws, and contractual assurances should be obtained. Deployers must also take their own steps to mitigate risk (e.g., governance around use cases, personal data in inputs and outputs, and their own transparency requirements).

What are the main contractual issues?

  • AI is driving huge changes in the commercial contract landscape, with almost every type of commercial arrangement impacted by AI. Counterparties need to address the new, specific risks raised by AI both in their existing agreements and in the various new types of agreement that are required to build, access, and deploy AI (e.g., AI-as-a-service (AIaaS) agreements, AI model development agreements, strategic AI partnership agreements etc).
  • Most businesses will be in the “squeezed middle”, between downstream end users of their AI systems or AI-generated outputs, and upstream AI system/model providers. These squeezed middle organizations are in an uncomfortable position in many ways and must assess and manage a wide range of potential delta risks.
  • Material upstream contracts include foundation model licenses, data licenses, development agreements, and graphics processing unit (GPU) leases. Key focus areas here include IP infringement and ownership, data usage rights, liability allocation, regulatory compliance, and privacy. Open weight AI licenses must also be reviewed for usage restrictions.
  • The proliferation of AI agents is also driving rapid change in the market approach to liability allocation and accountability in AI license agreements. Diligence should focus on the target’s maturity in assessing this in its key contracts.

What are the main cyber issues?

  • AI is driving new and enhanced cyber risks that are now a key concern for businesses when deploying AI. AI systems are both improving existing attack methods (e.g., better deepfakes to support phishing attacks) and increasing the attack surface for cyber criminals. As an example, attackers are exploiting the fact that AI systems respond to the data that they see via “prompt injection” attacks which insert (often hidden) content into a user’s prompt. This hidden content may contain malware or an instruction to the AI system to access and export confidential data.
  • Such risks are amplified for AI agents.
  • A target’s cybersecurity strategy should be probed for its maturity and posture to these sorts of attacks, particularly where it builds or deploys AI agents. Vendor contracts should include secure software supply chain processes.

What are the main antitrust issues?

  • In a diligence context, a growing issue for businesses surrounds the increasing reliance on a small group of AI developers, which gives rise to risks around concentration and lock-in. This is a particular concern in regulated sectors or industries where that reliance could inadvertently have a market impact or result in price collusion. This is an area of focus for antitrust regulators globally.
  • Merger control regulators are increasingly focusing on AI deals, including non-traditional structures such as partnerships and “acqui-hiring”.
  • Regulators are focused on ensuring deals allow ongoing access to critical inputs, sustained diversity of business models, fair dealing, and sufficient choice and transparency for consumers and businesses.

How are the structures of the deals changing?

  • In many AI investments, the core commercial objective is not outright control but preferential access to scarce AI capabilities (e.g., models, data and compute). This is driving increased focus on bespoke governance rights and commercial protections in investment documents, including field-of-use exclusivity, priority access to model updates, and veto rights over licensing to competitors.
  • IP and data rights are increasingly negotiated with the same intensity as equity economics. Investment terms are often closely interlinked with commercial arrangements (e.g., data licenses, model access agreements and development partnerships), with investors seeking enduring rights to use, train on or benefit from key datasets and model outputs. In some cases, these rights form a central part of the value proposition, as seen in large technology players’ equity-linked commercial programs.
  • Given the difficulty of valuing AI assets and the pace of technological change, consideration is increasingly structured around technical and commercial performance  metrics. These may include model performance, product deployment, user adoption or scalability milestones, and move beyond traditional financial earn-outs to mechanisms more closely aligned with the underlying technology and its real-world application.
  • Focused representations and warranties help to flush out disclosures relating to the risk areas highlighted above, as well as allocating liability should the statements being warranted prove untrue (subject to disclosure).
  • As with the early development of data protection risk allocation, there is a trend towards the use of escrows, holdbacks and specific indemnity structures to address difficult-to-quantify AI risks (e.g., data provenance, IP infringement and regulatory exposure). These mechanisms are being used to bridge gaps where risks are known but not fully diligenced or quantifiable at signing.
  • Investors should consider the insurability of AI risks and the feasibility of underwriting given exclusions for novel AI risks. Specific indemnities should also be considered for known issues, such as known copyright lawsuits or privacy complaints.
  • Minority and strategic investors are seeking deeper visibility into, and influence over, the evolution of AI systems. This includes enhanced information rights (e.g., access to product roadmaps, model updates and training approaches) and governance rights around key decisions affecting model development, deployment and commercialization, reflecting the importance of these factors to long-term value.
  • The pace and competitiveness of AI transactions often limit the ability of all investors to conduct full diligence. As a result, there is increased reliance on the lead investor’s diligence approach and risk assessment, with co-investors placing greater weight on alignment of interests, information-sharing and the credibility of the lead’s underwriting of key AI risks.

How should boards manage stakeholders?

  • Boards should validate that external messaging aligns with compliance realities and does not overpromise capabilities or understate risks (i.e., avoiding “AI washing”).
  • Plans should exist for regulator inquiries, customer reassurances, and investor communications, and adapted as the regulatory landscape evolves.