The Health Economic Case for AI in Radiology: A Systematic Review and Appraisal of Evaluation Quality and Methods
Author(s)
Lucy Gregory, BSc, MSc1, Federica Zanca, PhD2, Felicity Lock, MBBS3, Hugh Harvey, MD3.
1Health Economics and Outcomes Research, Hardian Health, Haywards Heath, United Kingdom, 2European Innovation Council, European Commission, Brussels, Belgium, 3Clinical, Hardian Health, Haywards Heath, United Kingdom.
OBJECTIVES: To identify and synthesise health economic models evaluating artificial intelligence (AI) technologies in radiology; to assess the methodological characteristics of these evaluations; and to compare evaluation quality and funding sources across different AI device categories.
METHODS: A systematic literature review was conducted in MEDLINE and Cochrane Central, covering publications from January 2014 to March 2025. Studies were included if they reported economic models for AI tools used in radiology. Inclusion criteria followed a PICO framework; only peer-reviewed, English-language, full-text studies were considered. Title and abstract screening was performed independently by two reviewers. Conflicts were resolved through consensus review. Full-text screening and data extraction were conducted by two reviewers using a pre-defined template covering AI algorithm details, imaging modality and type of formal economic evaluation.
RESULTS: From 433 initial records, 24 studies were included. 83% of studies were published after 2020, and the largest number in a single year was 7 (29%) in 2024. The most common imaging modality was computed tomography (39%), followed by radiography (29%). Only one study evaluated AI assisting an MRI workstream. AI was evaluated in a routine care setting (67%), a screening programme (25%), or both (8%). The majority of studies (71%) conducted a form of cost-effectiveness analysis; of these, 72% reported a cost-per-QALY incremental cost-effectiveness ratio (ICER). The modelling approach was not reported in 42% of studies; 17% used decision trees, 13% Markov models, and 29% hybrid models. Using CHEERS-AI, only 33% of evaluations scored higher than 70/100. 38% of studies received national funding and 33% reported no funding.
CONCLUSIONS: Until recently, few economic evaluations of AI in radiology had been conducted. The majority of studies report economic outcomes that align with the decision-making criteria of health technology assessment bodies. However, the overall quality of evaluations is poor owing to inadequately reported methods.
Conference/Value in Health Info
2025-11, ISPOR Europe 2025, Glasgow, Scotland
Value in Health, Volume 28, Issue S2
Code
SA93
Topic
Economic Evaluation, Medical Technologies, Study Approaches
Topic Subcategory
Literature Review & Synthesis
Disease
No Additional Disease & Conditions/Specialized Treatment Areas