WHY AI PROOF-OF-CONCEPTS DO NOT NECESSARILY PROVE ANYTHING: JOURNEY FROM POC TO PRODUCT

Author(s)

Hanan Irfan, MSc1, Tushar Srivastava, MSc2
1ConnectHEOR, Delhi, India; 2ConnectHEOR, London, United Kingdom

OBJECTIVES: Artificial intelligence (AI) proof-of-concepts (PoCs) are increasingly used in Health Economics and Outcomes Research (HEOR) to demonstrate feasibility and innovation. However, successful PoCs often fail to translate into reliable, health technology assessment (HTA)-ready products, creating a false sense of validation. This study examined why AI PoCs do not necessarily demonstrate real-world suitability for HEOR and identified the methodological, governance, and operational gaps that must be addressed to progress from PoC to deployable product.
METHODS: A qualitative analysis was conducted across multiple AI-enabled HEOR initiatives spanning literature review support, economic modelling, real-world evidence generation, and technical reporting. PoC designs were compared with production-ready requirements across five dimensions: (1) representativeness of test data, (2) robustness to edge cases and data drift, (3) transparency and auditability of outputs, (4) integration with HEOR quality management systems, and (5) accountability and ownership post-deployment. Failure modes were synthesised into a PoC-to-product maturity framework tailored to HEOR and HTA contexts.
RESULTS: PoCs were typically optimised to demonstrate technical feasibility under constrained, low-risk conditions, often relying on curated datasets, limited scenarios, and implicit expert supervision. These conditions masked key failure modes encountered during scale-up, including loss of traceability, inconsistent performance across disease areas, sensitivity to evolving evidence, and unclear accountability for errors influencing pricing or reimbursement decisions. Successful transition to product required substantial redesign beyond the PoC stage, including explicit validation thresholds, reproducibility controls, governance checkpoints, and post-deployment monitoring aligned with HTA expectations.
CONCLUSIONS: In HEOR, AI PoCs demonstrate possibility, not readiness. Treating PoCs as evidence of validity risks premature deployment of tools that fail under HTA scrutiny. Progressing from PoC to product requires reframing success criteria around governance, auditability, reproducibility, and decision impact, positioning AI development as a regulated analytical lifecycle rather than a one-time technical achievement.

Conference/Value in Health Info

2026-05, ISPOR 2026, Philadelphia, PA, USA

Value in Health, Volume 29, Issue S6

Code

MSR43

Topic

Methodological & Statistical Research

Topic Subcategory

Artificial Intelligence, Machine Learning, Predictive Analytics

Disease

No Additional Disease & Conditions/Specialized Treatment Areas
