I. AI’s roles in healthcare
AI is revolutionizing healthcare by enhancing various aspects such as drug development, disease diagnosis, treatment, patient monitoring, and administrative tasks. Notable examples include Google’s Med-PaLM, Stanford’s CheXNet, and NVIDIA’s partnership with Hippocratic AI. In addition to the advancements by the private sector, the World Health Organization (WHO) launched S.A.R.A.H. (Smart AI Resource Assistant for Health) in April 2024. This digital health promoter prototype, powered by generative AI, features enhanced empathetic responses in eight languages.
Looking ahead, we can expect a growing trend of collaboration among healthcare companies, technology firms, and research institutions. This synergy will drive further innovations and improvements in healthcare delivery and patient outcomes.
II. Legal frameworks governing AI in healthcare
Regulating AI in healthcare is an intricate task that involves striking a balance between fostering scientific innovation and protecting human rights and safety. Different countries may adopt various approaches to AI regulation, reflecting their unique values and priorities. For instance, jurisdictions such as the European Union (EU), Japan, South Korea, and China have AI-specific laws, while others, including the UK, U.S., and Australia, are applying existing technology-neutral laws to AI2. These diverging regulatory approaches result in significant compliance burdens for companies deploying and building AI.
We believe that effective regulation of AI in health requires international collaboration. By working together, countries can create a cohesive framework that enhances human welfare on a global scale. This collaborative effort can help ensure that AI technologies are used safely and ethically, while also promoting innovation and protecting human rights.
a. Overview of AI legal frameworks
i. Current AI legal frameworks
International organizations and governments are actively engaging with stakeholders to develop regulations and industry standards. Currently, most of these guidelines are principle-based, focusing on the fair and equitable use of AI. For instance,
- The WHO has published various guidelines on AI in healthcare, emphasizing ethical considerations and best practices. These guidelines stress the importance of designing and using AI systems in ways that respect patient privacy, promote equity, and mitigate biases.
- In 2024, the Organization for Economic Cooperation and Development (OECD) updated its AI Principles, marking the first intergovernmental standard on AI. These principles aim to balance innovation, human rights, and democratic values.
At the level of national legislation, the legal landscape for AI in healthcare is still in its infancy and continues to evolve. Many countries currently rely on existing technology-neutral laws, such as data protection and equality laws, as well as industry standards, to address AI-related matters. Additionally, some nations are taking proactive steps to develop dedicated approaches to issues arising from AI technologies.
- In the United States, the Food and Drug Administration (FDA) has recently issued several discussion papers on the use of AI in drug development, manufacturing, and medical devices, as well as guidance on decentralized clinical trials.3 The FDA generally supports the use of AI in healthcare development and has already reviewed and authorized over 1,200 AI/Machine Learning (ML)-enabled medical devices.4 In addition, the Center for Drug Evaluation and Research (CDER) of the FDA has established the Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative to support the adoption of advanced manufacturing technologies that could bring benefits to patients.
- In the EU, the AI Act is recognized as the world’s first comprehensive AI law. Although most of its requirements will only come into effect from August 2, 2026, and pure research and development AI is excluded from much of its scope, the Act imposes regulatory requirements on AI systems based on four risk categories: (1) prohibited AI, (2) high-risk AI, (3) AI triggering transparency requirements, and (4) general-purpose AI. In the context of healthcare, the middle two categories, “high-risk AI” and “AI triggering transparency requirements,” are likely to be the most relevant. These categories will impose specific regulatory obligations to ensure the safe and ethical use of AI in healthcare applications.
- We are also increasingly seeing healthcare companies use general-purpose AI models (“GPAIM”) for many hundreds of different use cases across R&D and corporate functions, typically by customizing large language models with proprietary data. As such, the industry has been calling for clarification regarding the extent to which such bespoke deployment of GPAIM will engage the specific EU AI Act obligations (applying from August 2, 2025). While the EU Commission’s guidelines5, published in July 2025, offer some insight into the compute threshold at which downstream modification constitutes the creation of a new model (with that downstream modifier then becoming a “provider” of the GPAIM and therefore subject to extensive compliance requirements), simple numerical thresholds do not necessarily tell the whole story. There are many different techniques for customizing general-purpose AI models, and a simple compute threshold will not capture some customization techniques that are likely to have a more significant impact on model behavior, such as system prompts. Careful case-by-case consideration of the modification in practice will be necessary. Organizations at risk of falling within the scope of the EU AI Act GPAI requirements should consider the relevance of the General Purpose AI Code of Practice (the GPAI Code)6. The GPAI Code, while non-binding, has been developed collaboratively under the leadership of the European AI Office and is intended to be a practical tool to support organizations in complying with the AI Act for GPAI models, addressing transparency, copyright, and safety and security in particular. The drafting process sparked significant debate among stakeholders, with some arguing that the GPAI Code is overly restrictive and calling for greater flexibility, particularly regarding the training of LLMs. However, the European Commission asserts that signatories will benefit from a “simple and transparent way to demonstrate compliance with the AI Act,” with enforcement expected to focus on monitoring their adherence to the GPAI Code. It remains to be seen how organizations will manage that adherence, particularly in the face of technical challenges (such as output filtering), legal complexities (not least the interplay with ongoing court action), and the allocation of liability between provider and deployer.
- Unlike the EU, the UK has, to date, chosen not to pass any AI-specific laws. Instead, it encourages regulators to first determine how existing technology-neutral legislation, such as the Medical Devices Regulations, the UK GDPR, and the Data Protection Act, can be applied to AI uses. For example, the Medicines and Healthcare products Regulatory Agency (MHRA) is actively working to extend existing software regulations to encompass “AI as a Medical Device” (or AIaMD). The MHRA’s new program focuses on ensuring both the explainability and interpretability of AI systems, as well as managing the retraining of AI models to maintain their effectiveness and safety over time.
- In China, the National Health Commission and the National Medical Products Administration have recently published several guidelines on the registration of AI-driven medical devices and on permissible use cases for AI in diagnosis, treatment, public health, medical education, and administration. The guidelines all emphasize AI’s assistive role in drug and medical device development and monitoring, under human supervision.
Leading AI developers are also setting up in-house AI ethics policies and processes, including independent ethics boards and review committees, to ensure safe and ethical AI research. These frameworks are crucial while the international landscape of legally binding regulations continues to mature.
ii. Recommendations: scenario-based assessments for AI tools
Healthcare companies face a delicate balancing act. On one hand, their license to operate depends on maintaining the trust of patients, which requires prioritizing safety above all else. Ensuring that patients feel secure is non-negotiable in a sector where lives are at stake. On the other hand, being overly risk-averse can stifle the very innovations that have the potential to transform lives and deliver better outcomes for patients and society as a whole. Striking this balance is critical: rigorous testing and review processes must coexist with a commitment to fostering innovation, ensuring progress without compromising safety.
In this regard, a risk-based framework is recommended for regulating AI in healthcare. This approach involves varying the approval processes based on the risk level of each application. Essentially, the higher the risks associated with the AI tools, the more controls and safeguards should be required by authorities. For instance, AI tools that conduct medical training, promote disease awareness, and perform medical automation should generally be considered low risk. Conversely, AI tools that perform autonomous surgery and critical monitoring may be regarded as higher risk and require greater transparency and scrutiny. By tailoring the regulatory requirements to the specific risks, we can foster innovation while ensuring that safety is adequately protected.
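To make the idea concrete, the sketch below shows, in Python, how a scenario-based assessment might map use-case categories to risk tiers and corresponding safeguards. The categories, tiers, and safeguard lists are illustrative assumptions for discussion purposes only and are not drawn from any existing regulation.

```python
# Minimal illustrative sketch (assumed categories and safeguards, not any statute):
# mapping hypothetical healthcare AI use cases to risk tiers and required controls.
from dataclasses import dataclass

# Hypothetical safeguards, ordered from least to most demanding.
SAFEGUARDS = {
    "low": ["basic documentation", "post-market feedback channel"],
    "medium": ["human oversight plan", "bias and performance testing", "transparency notice"],
    "high": ["pre-market conformity assessment", "continuous clinical monitoring",
             "incident reporting", "independent audit"],
}

# Example classification of use cases; real classifications would be set by regulators.
USE_CASE_TIERS = {
    "medical_training_content": "low",
    "disease_awareness_chatbot": "low",
    "diagnostic_triage_support": "medium",
    "critical_patient_monitoring": "high",
    "autonomous_surgery": "high",
}

@dataclass
class Assessment:
    use_case: str
    tier: str
    required_safeguards: list

def assess(use_case: str) -> Assessment:
    """Return the risk tier and required safeguards for a given use case."""
    tier = USE_CASE_TIERS.get(use_case, "high")  # default to the strictest tier when unknown
    return Assessment(use_case, tier, SAFEGUARDS[tier])

if __name__ == "__main__":
    for uc in ("disease_awareness_chatbot", "autonomous_surgery"):
        a = assess(uc)
        print(f"{a.use_case}: {a.tier} risk -> {a.required_safeguards}")
```

The design point is simply that obligations scale with risk: an unknown or novel use case defaults to the strictest tier until assessed.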
Moreover, teams reviewing AI systems should consist of stakeholders representing a broad range of expertise and disciplines to ensure comprehensive oversight. For example, this may include professionals with backgrounds in healthcare, medical technology, legal and compliance, cybersecurity, ethics and other relevant fields as well as patient interest groups. By bringing together diverse perspectives, the complexities and ethical considerations of AI in healthcare can be better addressed, fostering trust and accountability.
b. Data protection and privacy
Data privacy requirements are a key consideration when using AI in healthcare contexts, especially given that many jurisdictions’ laws broadly define “personal data,” potentially capturing a wide range of data. Further, privacy regulators have been the forerunners in bringing AI-related enforcement actions. For example, AI tools such as OpenAI’s ChatGPT have encountered extensive regulatory scrutiny at EU level through the European Data Protection Board (EDPB) taskforce, and NOYB (None of Your Business)/the European Center for Digital Rights, the data privacy campaign group founded by well-known privacy activist Max Schrems, has filed a complaint against the company in Austria alleging GDPR breaches. DeepSeek has also attracted immediate attention from EU and other international regulators, with investigations initiated and the EDPB taskforce extended to cover its offerings.
i. Privacy considerations in AI
There are several privacy considerations to navigate when using AI. This can raise challenges as developers, often U.S.-based, look to navigate highly regulated jurisdictions such as those in the EU, where regulators are scrutinizing approaches taken to data protection compliance. One such issue is identifying a lawful basis for the processing activity. Many jurisdictions’ data privacy laws contain a legitimate interests basis or similar provisions which, when applicable, permit the data controller to process personal data without first requiring individuals’ explicit consent. However, there are diverging views on whether this basis can be used for AI-related processing.
The European Data Protection Board (EDPB) issued Opinion 28/20247 in December 2024, which provides detailed guidance on the use of legitimate interests as a legal basis for processing personal data in the development and deployment of AI models, including LLMs (the EDPB AI Opinion). The EDPB AI Opinion, although indicating that legitimate interests may be a possible legal basis, emphasizes the need for a thorough balancing and necessity test and for robust safeguards to protect data subjects’ rights. The examples in the EDPB AI Opinion where legitimate interests could be a suitable lawful basis are relatively limited, including a conversational agent, fraud detection, and threat analysis in an information system. An EDPB Opinion adopted a few months earlier, in October 2024, which addresses the legitimate interests basis for processing personal data more generally (the EDPB LI Opinion), while helpful in referencing scientific research as a potential legitimate interest, is cautious about establishing a legitimate interest on the basis of societal benefit, emphasizing that the legitimate interest should be tied to the interest of the controller or a third party and that the processing should be “strictly” necessary to achieve that interest (i.e., there is no other reasonable and equally effective method that is less privacy intrusive). The EDPB AI Opinion also clarifies that the unlawful processing of personal data during the development phase may not automatically render subsequent processing in the deployment phase unlawful, but controllers must be able to demonstrate compliance and accountability throughout the lifecycle of the AI system.
ii. Individual consent
As an alternative, businesses may need to obtain individual consent for AI-related processing activities. While consent can be a difficult basis to rely on given the high bar for validity, it is particularly challenging in an AI healthcare context: special category data (which includes health data) attracts heightened compliance obligations, raising the requirement to “explicit consent,” and public distrust and misunderstanding of AI technologies compound the difficulty. Further, in some jurisdictions it is common for individuals to place stringent conditions, including time restrictions, on what their personal data can be used for. This could prevent their personal data being used in connection with AI, given it is not always possible to delete or amend personal data once it has been ingested into an AI system.
c. Professional accountability
Determining fault when an AI system makes an error is a particularly complex issue, especially given the number of parties that may be involved throughout the value chain. The challenge is heightened by the fact that different regulations may apply at different stages, and the legal landscape is still developing in response to these new technologies.
In the case of fully autonomous AI decision-making, one possible approach is that liability could fall on the AI developer, as it may be difficult to hold a human user responsible for outcomes they do not control. However, the allocation of responsibility could vary depending on the specific circumstances and regulatory frameworks in place.
Where AI systems operate with human involvement, another potential approach is for regulators to introduce a strict liability standard for consequences arising from the use of AI tools. While this could offer greater protection for patients, it may also have implications for the pace of technological innovation. Alternatively, some have suggested that requiring AI developers and commercial users to carry insurance against product liability claims could help address these risks. The WHO, for example, has recommended the establishment of no-fault, no-liability compensation funds as a way to ensure that patients are compensated for harm without the need to prove fault.8
In July 2025, a study commissioned by the European Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs was published9. It critically analyzes the EU’s evolving approach to regulating civil liability for AI systems, discusses four policy proposals, and advocates for a strict liability regime targeting high-risk AI systems.
Ultimately, the question of legal responsibility for AI in healthcare remains unsettled and is likely to require ongoing adaptation as technology and regulation evolve. Accountability will be a particular challenge given the complexity of the value chain and the interplay of different regulatory regimes. It will be important for all stakeholders to engage in continued dialogue to ensure that legal frameworks keep pace with technological developments and that patient safety remains a central focus.
III. Ethical concerns
There are multiple ethical considerations that developers and deployers may need to address when using AI systems in healthcare. Three prominent examples are explored below.
a. Bias causing unjust discrimination
Bias in AI systems can lead to unjustified discriminatory treatment of certain protected groups. There are two primary types of bias that may arise in healthcare:
- Disparate impact risk: This occurs when people are treated differently when they should be treated the same. For example, a study10 found that Black patients in the U.S. health care system were assigned significantly lower “risk scores” than White patients with similar medical conditions. This discrepancy arose because the algorithm used each patient’s annual cost of care as a proxy for the complexity of their medical condition(s). However, less money is spent on Black patients due to various factors including systemic racism, lower rates of insurance, and poorer access to care.11 Consequently, using care costs created unjustified discrepancies for Black patients (a toy simulation of this proxy effect is sketched after this list).
- Improper treatment risk: Bias in AI systems can arise when training data fails to account for the diversity of patient populations, leading to suboptimal or harmful outcomes. For example, one study12 demonstrated that facial recognition algorithms often exhibit higher error rates when identifying individuals with darker skin tones. While this study focused on facial recognition, the same principle applies in healthcare, where AI systems used for dermatological diagnoses have been found to perform less accurately on patients with darker skin.13 This occurs because the datasets used to train these systems often contain a disproportionate number of images from lighter-skinned individuals. Such biases can lead to misdiagnoses or delays in treatment, illustrating the critical need for diverse and representative training data in healthcare AI applications.
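The toy simulation below (in Python, with entirely assumed numbers) illustrates the proxy-label problem from the disparate impact example above: when spending is used as the training target, a group that receives less care for the same underlying illness burden ends up with systematically lower “risk scores.”

```python
# Toy simulation (assumed parameters, for illustration only) of the proxy-label problem:
# training a risk model on cost rather than on underlying illness burden.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
illness = rng.gamma(shape=2.0, scale=1.0, size=n)  # true health need (unobserved by the model)
group_b = rng.random(n) < 0.5                      # indicator for the disadvantaged group

# Assumption: for the same illness burden, less is spent on group B
# (access barriers, under-insurance, systemic bias in referrals).
cost = illness * np.where(group_b, 0.7, 1.0) + rng.normal(0, 0.1, n)

# A "risk score" trained to predict cost inherits the spending gap,
# even if it predicts cost perfectly (the best case for the model).
score = cost

# Compare scores for patients with comparable true need.
same_need = np.abs(illness - 2.0) < 0.1
print("mean score, group A:", score[same_need & ~group_b].mean())
print("mean score, group B:", score[same_need & group_b].mean())
# Group B receives markedly lower scores despite identical need,
# so fewer referrals to high-touch care programs.
```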
b. Transparency and explainability
Providing individuals with information about how healthcare decisions are made, the process used to reach that decision, and the factors considered is crucial for maintaining trust between medical professionals and their patients. Understanding the reasoning behind certain decisions is not only important for ensuring high-quality healthcare and patient safety, but also helps facilitate patients’ medical and bodily autonomy over their treatment. However, explainability can be particularly challenging for AI systems, especially generative AI, as their “black box” nature means deployers may not always be able to identify exactly how an AI system produced its output. It is hoped that technological advances, including recent work on neural network interpretability,14 will assist with practical solutions to this challenge.
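By way of illustration only, the sketch below applies one common post-hoc explainability technique, permutation feature importance, to a synthetic tabular model. The clinical features, data, and model are invented for the example and are not a statement about how any particular healthcare AI product explains its outputs.

```python
# Assumed example of post-hoc explainability on a synthetic tabular model:
# permutation importance measures how much shuffling each feature degrades performance.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2_000
X = np.column_stack([
    rng.normal(60, 12, n),    # age (hypothetical feature)
    rng.normal(120, 15, n),   # systolic blood pressure (hypothetical feature)
    rng.normal(5.5, 1.0, n),  # HbA1c (hypothetical feature)
])
feature_names = ["age", "systolic_bp", "hba1c"]
# Synthetic outcome driven mainly by HbA1c and blood pressure.
y = ((X[:, 2] > 6.0) & (X[:, 1] > 125)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")  # higher values indicate features the model relies on more
```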
c. Human review
To facilitate fair, high-quality outcomes, it is important for end-users—often healthcare professionals—to understand the AI system’s intended role in their clinical workflow and whether the AI system is intended to replace user decision-making or augment it.
However, it may not always be appropriate for the human to override the AI system’s output; their involvement in the workflow will likely vary depending on what the AI tool is being used for. For example, if an AI system has been trained to detect potentially cancerous cells in skin cell samples, and the AI system flags the sample as being potentially cancerous but the healthcare professional disagrees, it may be more appropriate to escalate the test to a second-level review than to permit the healthcare professional to simply override the AI system’s decision. A false positive here is likely to be less risky than a false negative. It is therefore important to take a considered, nuanced approach when determining how any human-in-the-loop process flow should operate.
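The following sketch (a hypothetical workflow in Python, not a regulatory requirement or any specific product’s logic) captures the escalation pattern described above: a clinician cannot silently override a positive AI flag, and disagreement routes the case to a second-level review.

```python
# Hypothetical human-in-the-loop escalation logic for the skin-sample example above:
# a false negative is treated as the costlier error, so a positive AI flag that the
# clinician disputes is escalated rather than overridden.
from enum import Enum

class Decision(Enum):
    CONFIRM_POSITIVE = "confirm_positive"       # both AI and clinician flag the sample
    CLEAR_NEGATIVE = "clear_negative"           # both agree the sample is not suspicious
    ESCALATE_SECOND_REVIEW = "escalate_review"  # disagreement over a positive AI flag

def resolve(ai_flags_positive: bool, clinician_flags_positive: bool) -> Decision:
    """Resolve AI/clinician disagreement, weighting false negatives as the riskier error."""
    if ai_flags_positive and clinician_flags_positive:
        return Decision.CONFIRM_POSITIVE
    if ai_flags_positive and not clinician_flags_positive:
        # The clinician may not simply override the model; a second reviewer decides.
        return Decision.ESCALATE_SECOND_REVIEW
    if not ai_flags_positive and clinician_flags_positive:
        # Clinician concern takes precedence over a negative model output.
        return Decision.CONFIRM_POSITIVE
    return Decision.CLEAR_NEGATIVE

print(resolve(ai_flags_positive=True, clinician_flags_positive=False))
# Decision.ESCALATE_SECOND_REVIEW
```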
IV. Conclusion
AI offers significant benefits in healthcare but also presents legal and ethical challenges that must be navigated. Collaborative efforts among policymakers, healthcare professionals, AI developers, and legal experts are essential to establish robust frameworks that safeguard patient rights and promote equitable access to advanced healthcare technologies.
* * *
Abstract: The article explores the transformative potential of AI in healthcare, highlighting its benefits in drug development, diagnosis, and patient care. It underscores the necessity for robust regulatory frameworks to address safety, privacy, and bias concerns. Various international and national regulatory approaches are discussed, including the EU’s AI Act and the U.S. FDA’s guidelines. Ethical issues such as bias, transparency, and professional accountability are examined. The article advocates for international collaboration and scenario-based assessments to ensure AI’s safe and ethical deployment in healthcare.
Contribution:
David Egan, Assistant General Counsel, Global Digital and Privacy, GSK (London)
Jieni Ji, Of Counsel, A&O Shearman (Hong Kong/Shanghai)
Footnotes
[1] K. Savchuk. “AI Will Be as Common in Healthcare as the Stethoscope.” May 15, 2024. gsb.stanford.edu/insights/ai-will-be-common-healthcare-stethoscope.
[2] In the U.S., there is no comprehensive federal legislation that regulates the development of AI to date. The White House recently released the U.S. AI Action Plan, which directs various U.S. agencies to take steps to invest in and enable vastly greater AI infrastructure in the U.S., foster AI innovation, and export U.S. AI innovation internationally while protecting U.S. trade secrets. The federal government is also seeking to quell AI regulation at the U.S. state level; however, many hundreds of AI-focused regulations have been enacted or proposed across U.S. states, resulting in a fragmented landscape.
[3] The U.S. FDA. “Conducting Clinical Trials With Decentralized Elements: Guidance for Industry, Investigators, and Other Interested Parties.” September 2024. https://www.fda.gov/media/167696/download.
[4] The U.S. FDA. “Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices.” July 10, 2025. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices.
[5] General Purpose AI Guidelines, July 18, 2025. https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act.
[6] General Purpose AI Code of Practice, July 10, 2025. https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai.
[7] EDPB Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models. https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-282024-certain-data-protection-aspects_en.
[8] WHO. “Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models.” 2024.
[9] European Parliament. “State of Play of Academic Freedom in the EU Member States.” Study, Directorate-General for Internal Policies, Policy Department for Citizens’ Rights and Constitutional Affairs, 2025. https://www.europarl.europa.eu/RegData/etudes/STUD/2025/776426/IUST_STU(2025)776426_EN.pdf.
[10] Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. “Dissecting racial bias in an algorithm used to manage the health of populations.” Science, 366(6464), 447-453 (2019). https://www.science.org/doi/10.1126/science.aax2342
[11] Hoffman, K.M., Trawalter, S., Axt, J.R., & Oliver, M.N. “Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites.” Proceedings of the National Academy of Sciences, 113(16), 4296-4301 (2016). pmc.ncbi.nlm.nih.gov/articles/PMC4638275 and www.pnas.org/doi/10.1073/pnas.1516047113.
[12] Buolamwini, J., & Gebru, T., “Gender shades: Intersectional accuracy disparities in commercial gender classification”, Proceedings of Machine Learning Research, 81, 1–15 (2018). https://www.media.mit.edu/publications/gender-shades-intersectional-accuracy-disparities-in-commercial-gender-classification/.
[13] Melanoma Research Alliance. “Making AI Work for People of Color: Diagnosing Melanoma and Other Skin Cancers.” 2022. https://www.curemelanoma.org/blog/article/making-ai-work-for-people-of-color-diagnosing-melanoma-and-other-skin-cancers.
[14] Shaham T., Schwettmann S., Wang F., et al. “A Multimodal Automated Interpretability Agent.” Forty-first International Conference on Machine Learning. 2024. arxiv.org/pdf/2404.14394.