Comprehensive Uncertainty Assessment in Economic Evaluations of AI-Based Health Technologies: Pitfalls and Recommendations

Author(s)

Mabel Wieman, PhD student1, Bram Ramaekers, Senior researcher1, Laure Wynants, Assistant professor1, Andrea Gabrio, Assistant professor1, Nigel Armstrong, Health economist manager2, Marie Westwood, Senior researcher3, Manuela Joore, Professor1, Sabine Grimm, Senior researcher1.
1Maastricht University, Maastricht, Netherlands, 2Kleijnen Systematic Reviews Ltd., York, United Kingdom, 3University of Bristol, Bristol, United Kingdom.
OBJECTIVES: There is an increasing potential for using artificial intelligence (AI)-based technologies in health care, but evidence is often inadequate to support reimbursement decisions. While uncertainty is common in the economic evaluation (EE) of any health technology, there is a particular lack of context-specific evidence in model-based EEs of AI. We highlight common uncertainties in the assessment of AI-based health technologies, explore how these are currently assessed, and formulate recommendations for practice and research.
METHODS: We identified common uncertainties in the assessment of AI-based health technologies based on literature. Through a review of papers identified in a published systematic literature review on EEs of AI, we explored how these uncertainties were assessed. Based on this, we developed recommendations.
RESULTS: Uncertainties within EEs of AI are often caused by the unavailability or indirectness of evidence relating to: 1) a lack of transportability across settings, 2) the effects of human-AI collaboration on effectiveness, and 3) the rapid obsolescence of evidence due to AI’s dynamic nature. Transportability and human-AI collaboration were occasionally addressed in existing EEs, but the dynamic nature of AI was not. We recommend the use of Grading of Recommendations Assessment, Development and Evaluation (GRADE) and the Transparent Uncertainty Assessment Tool (TRUST) to systematically identify uncertainties. Structured expert elicitation (SEE), prior specification, discrepancy analysis, scenario analysis, model averaging, and value-of-information analysis are potentially promising methods for addressing all three uncertainties. To address transportability, we recommend random-effects meta-analysis and calibration. For human-AI collaboration, including technology acceptance frameworks in the SEE exercise may help detect and adjust for bias. For AI’s dynamic nature, we recommend iterative assessment methods.
CONCLUSIONS: We developed recommendations for practice and research on uncertainty assessment methods for AI-based health technologies, which we hope will be useful in future assessments.

Conference/Value in Health Info

2025-11, ISPOR Europe 2025, Glasgow, Scotland

Value in Health, Volume 28, Issue S2

Code

EE149

Topic

Economic Evaluation, Health Technology Assessment, Methodological & Statistical Research

Topic Subcategory

Value of Information

Disease

No Additional Disease & Conditions/Specialized Treatment Areas
