The AI-in-RWE Transparency (AIRT) Checklist: Essential and Desirable Standards for AI-Enhanced Real-World Evidence
Author(s)
Tushar Srivastava, MSc1, Radha Sharma, PhD2, Raju Gautam, PhD1, Madhusudan Kabra, MSc3.
1ConnectHEOR, London, United Kingdom, 2ConnectHEOR, Edmonton, AB, Canada, 3MK Global Consulting, London, United Kingdom.
OBJECTIVES: Artificial intelligence (AI) is becoming integral to analytics that transform real-world data (RWD) into real-world evidence (RWE) for regulatory and health technology assessment (HTA) decisions. However, existing reporting checklists, originally designed for conventional statistical studies, seldom address the additional complexity introduced by machine-learning workflows. In AI-enabled RWE, crucial elements such as model training, performance validation, explainability, and bias mitigation are often inconsistently reported or omitted, limiting reproducibility and weakening decision-maker confidence. The growing regulatory focus on trustworthy AI, exemplified by the European Union’s AI Act, further reinforces the need for clear documentation of AI system development, validation, and governance, particularly in high-risk domains like healthcare.
METHODS: This study outlines a structured, open-access checklist, the AI-in-RWE Transparency (AIRT) checklist, that distinguishes between items considered essential for credibility and those deemed desirable depending on context. Checklist items were systematically mapped from regulatory guidance, methodological position papers, recent AI-enabled RWD publications, and stakeholder interviews spanning HTA agencies, life-science companies, and patient representatives. A multidisciplinary working group iteratively refined the checklist content and structure through facilitated discussions and written feedback, with an emphasis on practicality, transparency, and accuracy.
RESULTS: The emerging AIRT checklist captures reporting expectations across domains such as data sources, algorithm development, validation processes, interpretability, transparency, adaptive model governance, and communication of uncertainty. A companion digital tool is in development to guide users through each domain, generate structured transparency summaries, and highlight potential reporting gaps. Early pilot mapping of published studies indicates that consistent use of AIRT could improve clarity and streamline both internal and external review processes.
CONCLUSIONS: AIRT responds to an urgent gap in guidance for AI-enhanced RWE. By clarifying what must be reported and what should be reported, it aims to support both assessors and assessees in navigating an evolving regulatory landscape while encouraging trust, reproducibility, and informed healthcare decisions.
Conference/Value in Health Info
2025-11, ISPOR Europe 2025, Glasgow, Scotland
Value in Health, Volume 28, Issue S2
Code
MSR195
Topic
Methodological & Statistical Research
Topic Subcategory
Artificial Intelligence, Machine Learning, Predictive Analytics
Disease
No Additional Disease & Conditions/Specialized Treatment Areas