
AI as a strategic asset: what boards need to know about AI sovereignty to stay ahead

Published: Apr 30 2026
Related people
Alex Shandro, Partner, London
Praveeta Thayalan, Senior Knowledge Lawyer, London

Three forces shape every business’s AI strategy: geography and AI’s geopolitical context; the increasing complexities of the AI value chain; and the evolution of AI technologies and their applications. Here we explore how these factors impact the ability of multinationals to generate value from AI and defend themselves from AI-enabled competitors—while informing their approach to AI deals and risk management. This article forms part of a series exploring the drivers of change in an uncertain world.  

In brief

  • AI is a strategic asset shaped by geopolitics, national security priorities, industrial policy and regulatory competition.
  • Against this backdrop, “AI sovereignty” (a business’s or nation’s ability to control critical aspects of its AI capabilities) has become a key strategic concern.
  • Decisions around AI sovereignty are shaped by geopolitical competition, regulatory divergence (which is impacting decisions on compute location and governance), compute scarcity, and chip supply constraints.
  • Strategic choices are also defined by the evolution of AI technology. Immediate priorities include developing diverse partnerships and deciding whether to self-build AI systems or advance through targeted acquisitions. Longer term, decision-makers need to consider the rise of agentic AI and the prospect of artificial general intelligence.
  • Boards need to consider where their business sits on the AI value chain, which parts of the chain are critical to control, which dependencies are within their company’s risk tolerance, and whether AI investments are future-proofed to maximize opportunity rather than simply manage risk.

Artificial intelligence is not a neutral productivity technology. It is a strategic asset shaped by geopolitics, national security priorities, industrial policy and regulatory competition. Recent years have seen a proliferation of regulatory approaches, unprecedented state intervention, and countries and businesses racing to establish AI dominance. However, what “dominance” means in this context differs for every nation and business.

Jensen Huang, CEO of Nvidia, popularized the notion of “AI sovereignty” in late 2023. He defined the term as “a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks”. This definition is predicated on a nation’s ability to have its own infrastructure and capabilities. In practice, in this literal sense of “ownership”, AI sovereignty is a misnomer. No nation or business can purport to own its end-to-end AI capabilities. The AI value chain is too complex, and even the U.S. and China—the world’s two AI superpowers—are dependent on materials, technologies, and inputs provided by others, including each other. 

Instead, AI sovereignty is really about control, i.e., a business’s (or nation’s) ability to control the critical aspects of its AI capabilities. This, in turn, is driving strategic choices—where to invest, what to build yourself, who to partner with, what to buy, and who manages and secures your data, intellectual property and compute. These decisions unlock a business’s ability to innovate rapidly while protecting its core assets and customers from the vast landscape of legal and business risk.

The conflict in the Middle East demonstrates how quickly core assets like data and compute can be put at risk when data centers are prime targets, and why geopolitics will no doubt continue to shape decisions on hosting locations in the period ahead.

In general, there is no “one-size-fits-all” approach. At one extreme, a “fully sovereign” AI stack would be locally hosted (either on-premises or in a private local cloud) and disconnected from any public or third-party infrastructure. This is unrealistic for all but the most critical public or government use cases. At the other extreme, a “low-sovereignty” solution would be reliant on public cloud services and standard terms, with no guarantees on location or jurisdiction. In the middle lie a range of “partially sovereign” options that combine global providers with local controls (e.g., customer models on hyperscaler infrastructure or ring-fenced AI environments in the public cloud).

Within this range of options, businesses will make decisions based on cost, performance, cybersecurity, and regulatory exposure, as well as the extent to which they can secure favorable supplier terms on issues such as data, IP and accountability—especially in the context of AI agents.  

Long-term, the right strategy for “AI sovereignty” will ensure a business can do the following:

  • Maximize opportunity (not just manage risk) with increased control over AI deployment.
  • Remain agile in response to changing business strategy and market opportunity.
  • Withstand external shocks such as sudden regulatory changes or geopolitical disruptions.
  • Future-proof investment in the face of exponential developments in AI technology.

Why the AI superpowers, pioneers and specialists are central to decisions on AI sovereignty

Wherever a business operates globally, the resilience of its AI sovereignty model is dependent on the approaches and actions taken by the leading AI nation states (and the commercial providers operating within them), which can be separated into three groups.

The AI superpowers

The U.S. and China both have comprehensive coverage of the full AI value chain, from energy to use case, but have very different strategies.

  • The U.S. is focused on developing frontier AI models and winning the race to artificial general intelligence, fueled by tremendous investment in data centers and power generation. The federal government’s policy agenda is designed to remove barriers to AI innovation, although this has been somewhat undermined by prolific legislative activity and fragmentation at the state level. The U.S. intends to use its leadership in AI models to exert influence on the global stage.
  • By contrast, China’s AI policy agenda is based on an acceptance that—without access to the leading AI chips—its AI models will invariably lag the frontier. Instead, the Chinese government is focusing on a twin strategy of:
    • building up its own AI chip capabilities over time (minimizing dependencies on the U.S.); and
    • highly ambitious targets for AI adoption within industry, which it sees as the real engine for economic growth.
  • To achieve this, the Chinese state is legislating extensively and sees regulation as an essential tool to boost adoption. China has an additional advantage in its ability to build new power infrastructure. According to China’s National Energy Administration, since 2021 it has added more power capacity across all energy systems than the U.S. has in its history.

These contrasting approaches are having a significant impact on multinational businesses, many of which increasingly find themselves seeking to integrate both U.S. and Chinese technologies and navigate the resulting web of regulatory controls.

The AI pioneers

Below the two superpowers, recent geopolitical events have cast into sharp relief the sovereign AI strategies of the “AI pioneers” (or “middle powers”, as Canadian Prime Minister Mark Carney describes them). This group includes the UK, the larger EU member states, Canada, South Korea, and India, among others, with the likes of the UAE and Saudi Arabia seeking to join the group. The AI pioneers have coverage across the AI value chain but will need to rely on U.S. or Chinese technology (or more likely both) for the foreseeable future. Their regulatory focus is on AI sovereignty and fostering safe, “human-centered” AI innovation, with measures introduced in some jurisdictions to ease existing regulatory constraints in a bid to encourage development and adoption.

The AI specialists

Below the pioneers are certain nations that have supremacy in a particular part of the value chain. Examples include Taiwan (a leader in chip manufacturing) and Israel (cyber technology).

AI sovereignty across the value chain

The AI value chain describes the interconnected layers of activity required to extract value from AI. Every business will operate at one or more points along the value chain.

Downstream, most businesses will be developing or deploying applications using proprietary data sets, and enhancing capabilities through partnerships or joint ventures. Upstream, infrastructure providers and hardware manufacturers will be looking to rapidly increase capacity and supply chain pathways to meet unceasing demand. This is where the majority of investment is flowing. Ultimately, both ends of the value chain depend on a handful of established foundation model developers.

Each section of the value chain is subject to sector-specific regulation as well as any horizontal regulation applicable to AI. Investment, too, may occur at any point along the chain. The multiple roles occupied by every business along the value chain will dictate the strategic considerations, risks, regulatory requirements, contractual arrangements and governance.


The time horizon of AI sovereignty

AI sovereignty is not a static concept. It spans the immediate priorities of today through to the profound implications of artificial general intelligence (AGI) and beyond. A business’s strategy for sovereignty must account for this evolution.

1. Immediate focus: diverse partnerships and self-build

The immediate priorities for businesses are to pursue AI capabilities at speed through: (i) a multi-model, multi-jurisdictional partnership strategy; (ii) self-build capabilities (e.g., to super-charge proprietary data); and (iii) targeted acquisitions and licensing arrangements to obtain critical assets (e.g., IP, people and data).

2. Near-term: complex agent capabilities and sector-wide transformation

AI is moving decisively into the agentic era. The shift from “assistive” AI to agentic systems will transform how businesses operate, and how entire markets function. Multi-agent orchestration allows for sophisticated workflows in which teams of agents can perform increasingly complex tasks with autonomy. To take one example, e-commerce is set to be upended by agentic commerce, an umbrella label referring to the use of AI agents by consumers to search for the right products, make payments and ultimately shop autonomously. Major consumer-facing AI developers are partnering with payment and commerce platforms to build the infrastructure and protocols required to perform these tasks.

3. Medium-term: embedded AI and AI-native applications

AI will move into embedded applications such as robotics and industrial automation, software-defined vehicles and EV ecosystems, and healthcare applications spanning diagnostics and clinical decision support. This raises more questions around safety, national security, and critical infrastructure. In life sciences, the drug discovery lifecycle has long been undergoing its own AI-driven transformation, and embedding AI brings closer the promise of end-to-end automation. Meanwhile, mobile/edge computing and on-device assistants also raise challenges around data localization, jurisdictional exposure and cybersecurity.

4. Long-term: artificial general intelligence and beyond

The trajectory towards AGI (loosely, where AI systems can match or exceed human cognitive capabilities across a broad range of tasks) represents a qualitative shift in the business imperative of AI sovereignty. Current business, societal, and legal frameworks are not designed for generally autonomous AI systems. However, even without a clear timeline, boards should treat potential step-changes in capability as a strategic planning issue.

AI sovereignty supporting business ambitions

AI sovereignty can act as a North Star for a business’s most commercially significant decisions, and by doing so enable, rather than constrain, its strategic ambitions. Here we set out five core business drivers and explain how sovereignty considerations arise for each.

Agentic

As mentioned above, AI has moved into the agentic era. However, the use of AI agents to carry out tasks autonomously amplifies many of the risks that arise with AI models, while also creating new ones. Governance of multi-agent workflows is complex; constant “human-in-the-loop” oversight is not feasible, so alternative models of oversight are required, some of which will need to be built into the system itself. Here, sovereignty issues arise when considering accountability for bad outcomes arising from an agent. Accountability is a complex and tool-specific question, with differing positions at law in different countries. In general, AI agents have no separate legal personality, so they are best viewed as a tool in the hands of, and at the risk of, the deployer. The question then becomes to what extent the deployer can allocate those risks to the AI vendor contractually and otherwise manage them through its own system configuration and governance controls. These questions are a long way from being settled and will be influenced by market dynamics as much as anything.

M&A

M&A is a core part of any AI sovereignty strategy, as it enables a business to acquire strategic capability and avoid dependency. Faced with constant regulatory, market, and industry change, M&A decisions must be future-proofed and carefully considered. At the same time, AI opportunities often move faster than traditional diligence techniques allow. AI acquisitions therefore require a strategic, technology-led approach to due diligence—the risk landscape is too vast, the use cases too numerous, and deal timelines too short to adopt any other approach. Diligence workflows must adapt and bring together traditionally siloed experts into a single team that can focus on what really matters in the context of future strategy. Separately, there is increased merger control and foreign investment scrutiny on AI M&A, with regulators on the lookout for deals that may reduce consumer choice or increase prices (e.g., through algorithmic pricing).

Partnerships

Strategic partnerships are often how businesses pursue AI ambition at speed, as they allow access to best-in-class AI capabilities for a given use case. In this context, a partnership strategy grounded in AI sovereignty should, among other things, aim for diversity across providers and mitigate the delta risk between upstream and downstream contracts. For an individual contract, the focus should be on protecting core data and IP assets, rebalancing asymmetries (e.g., when dealing with large vendors), updating the deal as priorities change, and avoiding vendor lock-in.

Cyber

Cyber risks apply across the entire AI value chain and have become a top priority for most businesses’ AI strategies. New threats (e.g., prompt injection and model poisoning attacks), new attack surfaces and opaque model behavior materially change a business’s cyber threat profile. AI agents compound these risks due to the lack of constant human oversight.

Infrastructure and data centers

Decisions made in relation to AI infrastructure are harder to reverse as they embed jurisdictional exposure and can constrain future AI deployment. A sovereignty-first approach ensures that choices about chips, compute, and models enable businesses to scale, localize, and adapt their AI deployment as their priorities evolve.

Workforce redesign

The shift to agentic AI will spur the most significant workforce transformation of the next decade, impacting business structures, roles and skills, accountability, and operating models. An AI sovereignty lens is critical to building a future-proofed workforce architecture that can manage the employment, industrial relations, and organizational impacts of this shift.

Key questions for boards

Boards overseeing AI strategy should seek assurance from their management teams on the following.

1. Where does our business sit on the AI value chain today, and where do we need to be to deliver our strategy?

2. Which parts of the AI value chain are strategically important to control?

3. Which AI dependencies (e.g., data, models, infrastructure, partners) are acceptable within our risk tolerances?

4. Can we retain real control and optionality over our AI systems, including agents and core data/IP assets, as our use of AI evolves over time?

5. How resilient is our AI strategy to regulatory divergence, geopolitical shocks or vendor concentration risk? 

6. Are our AI investments future-proofed to maximize opportunity, not just manage risk?