BEYOND THE BLACK BOX: A NICE- AND CDA-ALIGNED TRANSPARENT GENERATIVE AI FRAMEWORK FOR HEOR EVIDENCE SYNTHESIS AND GENERATION
Author(s)
Barinder Singh, RPh1, Shubhram Pandey, MSc1, Nicola Waddell, HNC2, Inderpreet S. Marwaha, MSc, RPh1, Rajdeep Kaur, PhD1;
1Pharmacoevidence Pvt. Ltd., Mohali, India, 2Pharmacoevidence Pvt. Ltd., London, United Kingdom
OBJECTIVES: Generative artificial intelligence (GenAI) offers transformative potential for evidence synthesis, yet its adoption in health technology assessment (HTA) remains limited because of its ‘black box’ nature. We propose a comprehensive GenAI implementation framework aimed at enhancing its acceptability across global HTA bodies.
METHODS: We synthesized core requirements from the AI position statements of HTA bodies (NICE, CDA-AMC) and methodological standards groups (ISPOR, HTAi, and the Cochrane RAISE framework). A structured framework was developed by mapping high-level regulatory expectations regarding transparency and accountability to actionable operational steps, specifically designed to mitigate trust deficits and increase the acceptability of GenAI for HEOR processes.
RESULTS: The framework consists of five core pillars: (1) Human-in-the-loop: Defines AI as an assistive tool, where submitting organizations retain accountability and implement safeguards to preserve the critical appraisal skills of human reviewers; (2) Compliance and Security: Ensures alignment with regional AI laws, auditability of decision pathways, and protection against security threats (e.g., prompt injection attacks); (3) Quality and Bias Control: Mandates formal bias checks and reporting, comparison against human benchmarks, and strict fact-checking to mitigate AI hallucinations while ensuring AI outputs are accurate and reliable; (4) Reproducibility and Research Integrity: Enforces locked workflows, version control, and standardized reporting templates (adhering to ELEVATE-GenAI domains) to ensure outputs remain reproducible and contestable; (5) Intellectual Property and Legal Integrity: Assigns liability to the submitting organization, mandating compliance with copyright laws and licensing agreements to ensure all AI-generated evidence is legally defensible.
CONCLUSIONS: By applying this framework to standard HEOR use cases (including systematic literature reviews [SLRs], real-world evidence [RWE] synthesis, and health-economic modelling), we demonstrate that GenAI can be a reliable tool for evidence synthesis and generation. Ensuring that outputs remain traceable, inspectable, and human-verified effectively bridges the "acceptability gap" and satisfies the rigorous transparency standards of global HTA bodies.
Conference/Value in Health Info
2026-05, ISPOR 2026, Philadelphia, PA, USA
Value in Health, Volume 29, Issue S6
Code
MSR27
Topic
Methodological & Statistical Research
Topic Subcategory
Artificial Intelligence, Machine Learning, Predictive Analytics
Disease
No Additional Disease & Conditions/Specialized Treatment Areas