Ethical Implications of the Use of Artificial Intelligence (AI) Based Technologies for Medical Image Classification Systems in Screening: A Qualitative Systematic Review
Author(s)
Melina Vasileiou, MSc, Victoria Wakefield, MBChB, Clare Dadswell, PhD, Steve Edwards, DPhil.
BMJ Technology Assessment Group, London, United Kingdom.
OBJECTIVES: This review was designed to address the question: What are the ethical implications of the use of AI-based technologies for medical image classification systems in screening?
METHODS: A systematic search of qualitative literature published between June 2020 and September 2024 was carried out using the MEDLINE, Embase, PsycINFO, and CINAHL databases. The review focused on primary qualitative studies examining healthcare professionals’, patients’ and other stakeholders’ perspectives on AI use in screening. Thematic analysis was conducted, and confidence in the evidence was assessed using the GRADE-CERQual framework.
RESULTS: Fourteen qualitative studies met the inclusion criteria, capturing views from clinicians, radiologists, AI developers, policymakers and patients. Key ethical concerns included: (1) the need for human oversight to validate AI’s diagnostic recommendations; (2) uncertainty regarding liability when AI errors occur; (3) risks of algorithmic bias arising from discrepancies between training datasets and real-world populations; (4) issues around data privacy, cybersecurity and informed consent; (5) the importance of transparent decision-making in fostering trust; and (6) concerns about healthcare professionals becoming deskilled as AI takes on a greater role. While AI was regarded as a valuable tool to support clinical decision-making, stakeholders emphasised that its use must be guided by ethical frameworks to build public trust and maintain patient safety.
CONCLUSIONS: This review identified the key ethical challenges that must be addressed to ensure the responsible adoption of AI in medical screening. To support safe and effective integration, policymakers, healthcare institutions, and developers should prioritise human oversight, adopt clear regulatory policies, implement strategies to mitigate bias, and ensure transparency. Further research is needed to explore condition-specific ethical challenges and the long-term ethical implications of AI integration.
Conference/Value in Health Info
2025-11, ISPOR Europe 2025, Glasgow, Scotland
Value in Health, Volume 28, Issue S2
Code
HSD45
Topic
Health Policy & Regulatory, Health Service Delivery & Process of Care, Medical Technologies
Disease
Diabetes/Endocrine/Metabolic Disorders (including obesity), Oncology, Sensory System Disorders (Ear, Eye, Dental, Skin)