Frameworks for the Use of AI Within HEOR: A Targeted Literature Review and Thematic Analysis
Author(s)
Cassandra Springate, PhD, Alexandra Furber, MSc, Andrew Easton, MSc, Eira Fearnall, BSc, Kevin K. Cadwell, PhD.
HEOR Ltd, Cardiff, United Kingdom.
OBJECTIVES: The notion that artificial intelligence (AI) has the potential to transform HEOR is hardly a revelation, yet this transformation seems slow to take off. A key barrier to AI adoption is uncertainty over its acceptance, due in part to a lack of guidelines or governance. The objective of this research was to understand what guidance is available on the use of AI within HEOR, and to evaluate what guidance is necessary.
METHODS: A targeted literature review (TLR) was conducted to identify reports of governance frameworks, guidelines, regulation, and position statements of the use of AI within HEOR. Searches were conducted in MEDLINE, Semantic Scholar, government websites, and governing and regulatory bodies. Thematic analysis was conducted and the utility of recommendations was assessed.
RESULTS: Focussed searches returned 466 records, of which 34 documents were reviewed. Relatively vague language was used in many cases. The most common recommendations were to report clearly when AI is used, to validate the accuracy of the AI, and to retain human oversight of all outputs. Often, no additional detail was provided on what these steps entail. The most useful and insightful guidance came from the FDA's risk-based credibility assessment framework, which recommends a risk assessment of the AI use before considering mitigation and reporting guidelines.
CONCLUSIONS: AI is on the tipping point of wide adoption, yet many guidelines offer little practical value, driven in part by the variety of potential applications. Guidelines that are clear and specific would be beneficial and, furthermore, could be tailored to each type of evidence generation activity. Until these are available, we recommend organisations take a case-by-case approach in line with the risk-based credibility assessment framework. Next steps would be to engage with AI development experts to provide more explicit guidance on the potential risks associated with different AI systems.
Conference/Value in Health Info
2025-11, ISPOR Europe 2025, Glasgow, Scotland
Value in Health, Volume 28, Issue S2
Code
OP8
Topic
Health Technology Assessment, Methodological & Statistical Research, Organizational Practices
Disease
No Additional Disease & Conditions/Specialized Treatment Areas