Context
The ICO highlights that the AIBS should be considered in the context of recent commitments to the Government on supporting economic growth, policy positions on generative AI and regulatory actions such as intervening with Snap’s AI chatbot and ordering Serco Leisure to stop using biometric technology to monitor its employees.
The wider strategic context is also driven by the ICO 25 strategic plan, which set out the direction and priorities for the ICO under Information Commissioner John Edwards. The ICO 25 plan was centred around objectives of empowering responsible innovation and sustainable economic growth, alongside safeguarding and empowering people – including addressing the impact of technology on vulnerable groups.
The ICO’s longstanding approach is one of “outcome-based regulation”, which can be characterised as regulatory strategy and policy that focuses on the results of an activity or process rather than the specific rules or procedures used to achieve those results. It emphasises what is being delivered and holds organisations accountable for achieving the desired results. Such an approach is also risk-based, focused on the areas of greatest potential harm to the public and on the benefits of providing regulatory certainty. It also involves cooperative regulation for those entities willing to engage, and enforcement action for the most wilful, egregious and negligent breaches of the law. Other UK regulators, such as the Financial Conduct Authority, take a similar approach.
Key priorities
The ICO’s AIBS reflects its outcome-driven approach and, to that end, it is a practical document that makes clear where the data protection regulator sees the greatest areas of risk. It sends a clear signal to organisations on where they should focus governance, mitigation and compliance activities. The AIBS also sets out how the ICO will support organisations and when it may take action.
The AIBS seeks to address two key challenges, based on evidence and research the ICO has undertaken:
- private and public sector organisations can lack the regulatory certainty and confidence to invest in and use AI and biometric technologies compliantly; and
- a lack of transparency and confidence about how personal information is used in these technologies can undermine public trust.
The AIBS focuses on three key areas of risk:
- the development of foundation models — large-scale models trained on vast datasets and adaptable to a wide range of downstream tasks;
- the use of automated decision-making in recruitment and public services; and
- the use of facial recognition technology (FRT) by police forces.
The AIBS also targets the following areas of GDPR compliance:
- transparency and explainability,
- bias and discrimination, and
- rights and redress.
The AIBS stresses the importance of using AI and biometrics responsibly: ensuring high standards of automated decision making (ADM), fairness and accountability; protecting personal data and preventing harm; and applying proportionality in the deployment of facial recognition technology in policing.
The action plan that closes the AIBS then sets out the key activities the ICO will take during 2025/2026. The tasks are structured around a range of regulatory actions, providing examples of how the ICO will deliver end-to-end regulation in pursuit of its objectives. The key actions are as follows:
- Guidance on ADM and a new statutory code of practice on AI and ADM. The former will be available by autumn 2025. The guidance and code will also reflect the changes to the Article 22 GDPR ADM provisions that are currently in the Data Use and Access Bill (now finalised and set to become law in the next few months). New guidance on policing and FRT will also be published (this will presumably build on the Opinion published by the ICO in 2019).
- Regulatory engagement with central government to understand ADM implementation and setting out regulatory expectations.
- Scrutiny of major employers and recruitment platforms regarding use of ADM in recruitment, particularly regarding transparency, discrimination and redress. Publishing findings and taking action where required.
- Securing assurances from developers of foundation models on how personal data is protected, and setting out regulatory expectations.
- Auditing of police use of facial recognition technology.
- Policy recommendations on where the law may need to change in the context of FRT.
- Taking action when necessary to address unlawful uses of AI and biometric technologies.
- Industry engagement on data protection implications of agentic AI with associated report and consultation.
Implications for organisations covered by UK GDPR
Organisations that play a key role in the AI supply chain or in the deployment of facial recognition technologies should be prepared for regulatory engagement with the ICO. They will need to be ready to disclose information about the governance in place to address compliance, including data protection impact assessments and legitimate interest assessments. Organisations should consider how they will react and evolve their approach in light of ICO feedback. If the ICO sees substantive evidence of improved outcomes, formal action may not be needed. It is notable that formal audits are only slated for police forces.
The guidance and code on AI and ADM will be pivotal documents in advising organisations on the steps they need to take to comply with the GDPR. The ICO will run consultation processes, and organisations should take the opportunity to submit evidence of practical challenges and to flag where further guidance is needed. For example, the ICO’s 2024 generative AI consultation response provided some valuable guidance but left open questions. The grey areas included the use of special category data, when joint controller arrangements may be needed in the AI supply chain, and how far transparency measures need to go when model developers use personal data from third-party sources.
There is explicit reference in the AIBS to uses of special category data in foundation models, indicating that the ICO may closely examine compliance with Article 9 GDPR and the additional lawful basis needed to process this type of data. Compliance is particularly challenging when the data is gathered incidentally as part of the dataset used to train a foundation model (in contrast to when the data is knowingly used, e.g. in models used in the health sector). At present it is a matter of debate which Article 9 condition might apply to incidental uses in foundation model development. It would be helpful for the ICO to clarify this for controllers and to approach Article 9 compliance, particularly in so far as incidental use of special category data is concerned, with its outcome-based methodology to the fore.
The AIBS makes clear that the ICO will examine different parts of the AI and biometrics supply chain – covering both development and deployment. The focus on the latter will engage a large pool of organisations, for example in relation to use of AI in recruitment.
It is also notable that the ICO has chosen to focus on uses of FRT in the policing sector rather than uses in public spaces more broadly, including by commercial organisations. This may indicate that the ICO regards uses of FRT by the State as posing a higher degree of risk, though it does not preclude the ICO from acting against commercial deployments.
Conclusion
The AIBS delivered by the ICO provides welcome clarity on regulatory priorities and risks, which will enable organisations to plan and invest in relevant areas of governance. There is strong evidence to support the areas of priority and the ICO has an effective plan to provide advice and guidance to support organisations. Organisations should take up the forthcoming opportunities for consultation and engagement to demonstrate how governance and compliance operate in practice.