Generative AI hardware - the other arms race

Published Date
Mar 5, 2024
Generative AI has captivated the world over the past year. But these transformative technologies would not be possible without the specialized hardware and infrastructure that enable the large language models underpinning Generative AI to be trained and operated – and there has been a persistent shortage of chips powerful enough to handle those operations. In this article, we explore three core themes: (i) crucial considerations for corporate governance in handling the constraints facing the AI hardware industry, (ii) the impact of AI regulations and export controls on the supply of AI chips, including the rise of new network models, and (iii) strategies for protecting IP in AI hardware design.

What are AI chips?

With the release of OpenAI’s ChatGPT in 2022 and the proliferation of large language models in 2023, the demand for Generative AI has skyrocketed and, with it, the demand for computer chips powerful enough to develop and operate AI models.

But AI chips are not ordinary computer chips, and they have been in increasingly short supply. They deliver far greater processing power than general-purpose central processing units (CPUs) and are specially designed to handle the massive volumes of data and calculations that Generative AI algorithms require. They are faster and more energy efficient than CPUs, which is achieved by incorporating greater numbers of smaller transistors, by executing calculations in parallel rather than sequentially, by storing an entire AI algorithm on a single chip, and by using programming languages that optimize AI code execution. While there are different types of AI chips for different tasks, the most widely used are graphics processing units (GPUs), which are most often deployed for training AI algorithms.

Because of these technical requirements, AI chips are more expensive, more complex, and more difficult to produce than CPUs. The latest generation of Generative AI systems requires state-of-the-art AI chips: older AI chips, with their larger, slower and more power-hungry transistors, incur energy costs that quickly balloon to unaffordable levels. The shortage of AI chips has become a major bottleneck for the AI industry, as well as a potential risk for investors – indeed, in 2023 Microsoft's annual report identified the availability of GPUs as a possible risk factor for investors for the first time. This underlines the growing need for management and boards to be alive to risks that may threaten the supply chains of critical hardware.

This includes gaining a deeper understanding of current market conditions and the players that dominate them. Nvidia's chip designs, built over the past decade, are currently considered the world leader in this generation of AI hardware, with other chip industry giants such as Intel and AMD moving quickly to catch up. Because AI hardware is key to both training and operating AI models, it acts as a bottleneck both for the companies racing to build large language models and for their customers seeking to run Generative AI applications. At the same time, AI chip manufacturers face supply issues of their own, as competing AI GPUs are often produced at the same foundries, which have their own production limits.

AI regulations: a double-edged sword 

In addition to understanding the supply chain for Generative AI hardware, companies must also keep on top of legal and geopolitical considerations, which often culminate in restrictions imposed by regulation. While most countries have focused on regulating the applications of Generative AI – such as chatbots, deepfakes, and synthetic media – rather than the technology underpinning those applications, the US has focused on managing a different risk: supply. The US has imposed various restrictions on the export of Generative AI chips and chip manufacturing equipment to China in an effort to limit China's access to AI computing power. Although AI regulations in the Asia Pacific region have likewise been directed towards AI applications, the manufacturing of Generative AI chips and associated processes may be affected by the US decision to capture AI hardware and chips within its sanctions and export controls. These moves have significant implications for the global AI hardware industry, as well as for the users and customers of Generative AI services – particularly in China.

However, these restrictions have not stopped China from pursuing its Generative AI ambitions. If anything, the constraints are forcing innovation and self-reliance in Generative AI hardware: China has been developing its own domestic AI chip industry, as well as accessing Generative AI computing power via the cloud. To close this cloud loophole, the US Congress is now considering further measures to block China from remotely accessing American semiconductors and chips that it cannot purchase under the export controls. In July 2023, Representative Jeff Jackson introduced a bill that would prohibit US persons and subsidiaries from providing support for the remote use or cloud use of any integrated circuits.

Despite spurring innovation in China, the US sanctions have also caused disruptions and shortages in other sectors that rely on Generative AI chips, such as smartphones and smart cars. For Huawei, one of China's leading technology companies, the sanctions have forced a diversification into Generative AI chip production, but at the expense of the chips it produces for smartphones and driver-assistance functionality.

New network models: a way forward

In response to these constraints, chip manufacturers and customers are devising novel strategies to work around the shortages of Generative AI chips and to mitigate the growing cost of supply chain and regulatory constraints.

On one level, we are seeing the rise of AI “Infrastructure-as-a-Service” models, where Generative AI hardware is virtualized and accessed in a manner similar to ordinary cloud computing offerings. With AI hardware constrained, access to and availability of AI chips are critical to developing these capabilities, so chip manufacturers are increasingly partnering directly with cloud service providers to commercialize capacity rather than supplying chips to OEMs only. This gives chip manufacturers an alternative route to reach more end users without incurring much additional risk, and it allows more end users to access GPU computing power without suffering the uncertainties of delayed hardware manufacture and supply.

We also see the complementary rise of “edge AI”, where AI chips are deployed directly on local edge devices, like sensors or internet-of-things devices, rather than on centralized servers in data centers – as this reduces latency and bandwidth consumption, permitting edge systems to run Generative AI applications efficiently, even with less advanced AI hardware.

These evolving models can, however, give rise to challenging legal issues for companies engaging in Generative AI cloud computing or edge AI.

For Generative AI cloud computing, business models are responding to chip manufacturers' significantly greater commercial leverage in AI-focused data center and cloud computing projects. Regulators are also engaging with the scope of export restrictions for Generative AI hardware – including whether to extend them to cloud models – which will be front of mind for service providers and their customers further down the service chain. For edge AI, industries must contend with additional regulations: for example, car companies developing smart cars or autonomous vehicles must ensure compliance with privacy, data processing, and telecommunications rules, and this may extend to AI regulations depending on how Generative AI models and hardware are adopted.

Protecting trade secrets

With supply constrained, leading technology hardware companies are seeking to develop increasingly energy-efficient AI hardware. While the classic means of protecting intellectual property is the patent, patent protection requires publication of the patentable design, disclosing the innovation to the world. Trade secret rules, by contrast, are better suited to preserving the secrecy of valuable IP amid the fast-moving evolution of AI chips.

Typically, for information to qualify as a trade secret, it must: (a) not be generally known, (b) be subject to reasonable steps to keep it secret, and (c) have actual or potential value arising from the fact that it is not generally known. So long as this test is met, trade secrets can cover all kinds of information, and that information need not be novel or unique, as is required for patents. For example, the recipe for Coca-Cola and the formulation of WD-40 are both trade secrets. Trade secret protection can potentially last indefinitely, without the need to file an application or seek approval. Unlike patent holders, however, owners of trade secrets have no exclusionary rights – they cannot exclude others from using the information and can only prevent others from misappropriating it.

The major advantage of relying on trade secrets to protect IP in AI chips is that it keeps competitors in the dark for as long as the information remains secret. And in an area such as AI, where the state of the art is evolving so rapidly, keeping new technology secret can be immensely valuable.


Although AI development and applications are exploding, constraints on the manufacture and supply of the underlying hardware, together with regulatory restrictions, challenge the industry's access to sufficient computing power to run Generative AI. Companies incorporating Generative AI into their businesses must remain alert to the supply chain and geopolitical considerations affecting Generative AI chip supply. In response, they may look to new computing and networking models, such as AI “Infrastructure-as-a-Service” and “edge AI”, which are beginning to gain ground; in doing so, however, they must address novel legal issues. Companies developing new IP in Generative AI chip design to meet rising demand must also secure the appropriate protection. As Generative AI is integrated ever more deeply into our lives, the stakes for procuring AI-compatible hardware will only increase. It is essential for companies to stay informed, proactive, and adaptable in this dynamic environment.

Content Disclaimer
This content was originally published by Allen & Overy before the A&O Shearman merger