Abstract
Objectives
To address the methodological challenges in health economic evaluations (HEEs) of artificial intelligence (AI), this study systematically reviewed HEEs of AI-assisted cancer screening and diagnosis, assessing their methodologies and reporting quality and offering recommendations for future HEEs.
Methods
We systematically searched 5 databases from inception to December 2024 following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines. Two researchers independently screened studies, extracted data, and performed a descriptive analysis. The reporting quality of the selected studies was assessed using the Consolidated Health Economic Evaluation Reporting Standards for Interventions That Use Artificial Intelligence (CHEERS-AI) and the Philips checklist.
Results
A total of 17 studies evaluating AI-assisted screening or diagnosis across 8 cancer types were included. Artificial intelligence was primarily applied to enhance the sensitivity or specificity of cancer screening or diagnosis. The Markov model was the most frequently used model type, and cohort simulation was the most common simulation method. Findings from the main analyses generally suggested cost-effectiveness, although sensitivity analyses yielded inconsistent results. Regarding quality, the Philips checklist revealed omissions in general modeling elements such as half-cycle correction, data quality assessment, and subgroup analysis. The CHEERS-AI assessment showed that AI-specific items were not adequately addressed, particularly the measurement and modeling of AI learning over time, population differences, and implementation aspects.
Conclusions
A limitation of this review was the considerable heterogeneity in reporting quality among the included HEEs. AI-specific items in future HEEs should be addressed more comprehensively, in line with the CHEERS-AI checklist.
Authors
Yuyanzi Zhang, Lei Wang, Yifang Liang, Annushiah Vasan Thakumar, Hongfei Hu, Yan Li, Aixia Ma, Hongchao Li, Luying Wang