Opinion

Zooming in on AI #15: Regulatory spaghetti and AI – how to make sense of the EU GDPR and the EU AI Act

Read Time: 7 mins
Published Date: Feb 3 2025

Given the rapid pace of development in the field of AI, it is increasingly important that businesses develop effective governance to address the regulatory framework governing the development, training, use and deployment of AI.

As Regulation (EU) 2016/679 (the General Data Protection Regulation) (the EU GDPR) has been in effect since 2018, businesses now have the opportunity to consider where they can leverage their existing data protection governance to support compliance with Regulation (EU) 2024/1689, also known as the Artificial Intelligence Act (the EU AI Act). They will also need to understand where important differences exist and how to approach new or updated compliance measures. In this blog we set out the key issues to consider and how to navigate the two regimes together effectively.

EU GDPR v EU AI Act – how do they differ? 

The EU GDPR has become a fact of modern-day business and has been widely copied into legislation around the world. By contrast, the EU AI Act has generally not inspired many lookalike laws, but it nonetheless resonates with high-level principles articulated in international instruments such as the OECD AI Principles and the Council of Europe’s Framework Convention on Artificial Intelligence.

The fundamental difference between the EU GDPR and the EU AI Act lies in the fact that the EU AI Act is a product safety law specifically concerned with the safe development, deployment and use of AI systems, whereas the EU GDPR is a much broader fundamental rights law that enshrines individuals’ rights regarding the processing of their personal data.

Before we turn to consider the areas of overlap and where existing compliance mechanisms can be leveraged, it is perhaps worth zooming out to take a bird’s eye view of the overall landscape.

Both the EU GDPR and the EU AI Act aim in general to be technologically neutral. As a starting point, the EU GDPR applies to all personal data processing within both its jurisdictional and material scope, irrespective of the risk profile of that data. As such, personal data that may be thought to be relatively low risk – for instance, employees’ work contact data – is subject to the same protections as sensitive health data, although the application of those protections will differ. The underlying ethos of the EU GDPR is very much to consider the context, to apply tests of necessity, and to consider what is appropriate. More stringent security measures will, for instance, inevitably be more appropriate for higher-risk data than for anodyne data that is unlikely to cause harm.

While the principles behind the EU AI Act are similar to those of the EU GDPR – in that it has always been presented as being risk-orientated – the legislative approach is different, such that the rules it enshrines apply only to certain activities and certain use cases. As is well known, the EU AI Act considers four different risk groups: (1) prohibited AI practices; (2) high-risk; (3) limited-risk; and (4) minimal-risk AI systems. It also contains provisions on general-purpose AI models (GPAI models). You can read further in our blog on the obligations for high-risk AI systems, our blog on limited-risk AI systems, and our blog on GPAI models.

Given that many AI use cases involve the processing of data, a company may be caught in different ways under the two regimes. For example:

  • A company may be subject to the EU GDPR in respect of its operation of an AI system as a controller, where that AI system involves the processing of personal data in the context of its EU establishment and the company is directing the key elements of the processing activities. The full panoply of the EU GDPR (applied proportionately) will apply, but the system may be classed as a limited-risk AI system under the EU AI Act (e.g. an AI chatbot assistant on a retail site).
  • If a company develops an AI system which does not process personal data (e.g. road traffic system monitoring), this use of data will not be subject to the EU GDPR at all, but the company will still be a provider of a high-risk AI system under the EU AI Act.
  • Where a company provides a recruitment service that uses an AI system to make decisions, the service is likely to involve the processing of personal data and therefore be subject to the EU GDPR – perhaps as a processor rather than a controller, if it is acting on behalf of another company. It may also qualify as a provider of a high-risk AI system under the EU AI Act.

A company which makes the key decisions in respect of a biometric identification system that it operates is likely to be a controller for that AI system under the EU GDPR and a deployer of a high-risk system under the EU AI Act. Businesses will therefore need to consider the overlaps and differences in the context of their particular uses of AI.

Leveraging compliance frameworks

Despite their differences, both the EU AI Act and the EU GDPR are focused on ensuring the responsible and ethical use of technology, and to some extent there are compliance duties which can be read across from one law to the other. In particular, businesses can look to harmonise their compliance efforts under both frameworks in areas such as transparency, technical and organisational measures, and governance. This requires careful mapping, but we have set out below a snapshot of some areas of overlap that companies can consider tackling together.

Transparency 

EU GDPR – Articles 13 and 14: Right to be informed

Controllers must inform individuals about the collection and use of their personal data, including by:

  • Providing details such as the identity and contact information of the controller
  • Explaining the purposes and legal basis for the data processing
  • Disclosing any recipients of the personal data
  • Informing individuals about the data retention period
  • Notifying individuals if automated decision-making, including profiling, is involved in how their data will be processed

EU GDPR – Article 22: Right against automated decision-making

  • Individuals have the right not to be subject to decisions based solely on automated processing, including profiling, unless specific conditions are met.
  • If automated decision-making is used, controllers must:
      • provide meaningful information about the logic involved, and
      • explain the significance and consequences of such processing to the individual.

EU AI Act – Article 50: Transparency obligations

These obligations (additional to those under national or Union law) require providers and deployers of certain AI systems to make various disclosures, including:

  • Informing users at the first interaction that they are engaging with an AI system.
  • Clearly labelling text which is published for the purpose of informing the public on matters of public interest where it has been artificially created or modified.
  • Identifying outputs from AI systems generating synthetic content as artificially generated or manipulated.

EU AI Act – Article 13: User information requirements

  • Transparent information must be provided not only to individuals but also to entities using AI systems, e.g. a company that uses a system made available by a developer.
  • There is a greater focus on the technical information that should be provided to users (both individuals and entities).

EU AI Act – Article 86: Decision-making transparency

  • Any person affected by a decision made by a deployer using a high-risk AI system can request a clear explanation of the decision-making process and the key elements of the decision made.
  • This applies to the extent it is not already covered under Union law, and reflects the EU GDPR's emphasis on transparency in decision-making.