MAPPING HUMAN EFFORT DISPLACEMENT IN AI-ASSISTED HEOR WORKFLOWS
Author(s)
Hanan Irfan, MSc1, Tushar Srivastava, MSc2, Shilpi Swami, MSc2;
1ConnectHEOR, Delhi, India, 2ConnectHEOR, London, United Kingdom
OBJECTIVES: The integration of artificial intelligence (AI) into Health Economics and Outcomes Research (HEOR) workflows alters how human effort is allocated across evidence generation and decision-support activities. Rather than functioning solely as a net reducer of workload, AI-assisted tools may displace effort toward oversight, governance, and accountability tasks. This study aimed to quantify and characterise how human time, cognitive load, and expert responsibility are redistributed across HEOR workflows following AI adoption.
METHODS: A comparative workflow analysis was conducted contrasting traditional and AI-assisted processes across five HEOR domains: systematic literature review support, evidence synthesis, economic modelling, dossier drafting, and quality assurance (QA). Tasks within each domain were decomposed into discrete steps and classified as automated, partially automated, human-led, or newly introduced through AI adoption. Human effort was assessed across four dimensions: time allocation, cognitive load, required expertise level, and accountability. Analyses were stratified by use case, including exploratory analyses, internal decision support, and HTA-facing deliverables.
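The task decomposition described above can be sketched as a simple data structure: each discrete step carries an automation status and scores on the four effort dimensions (time allocation, cognitive load, expertise level, accountability), which can then be aggregated by status. All task names, hours, and ordinal scores below are hypothetical illustrations of the classification scheme, not study data.

```python
from dataclasses import dataclass
from collections import defaultdict

# Automation status categories from the workflow decomposition.
STATUSES = ("automated", "partially_automated", "human_led", "new_ai_task")

@dataclass
class Task:
    """One discrete workflow step, scored on the four effort dimensions."""
    domain: str              # e.g. "QA", "economic modelling"
    status: str              # one of STATUSES
    time_hours: float        # time allocation
    cognitive_load: int      # ordinal 1 (low) to 3 (high)
    expertise_level: int     # ordinal 1 (junior) to 3 (senior)
    accountable_role: str    # who signs off on the output

def effort_by_status(tasks):
    """Aggregate time allocation per automation status."""
    totals = defaultdict(float)
    for t in tasks:
        if t.status not in STATUSES:
            raise ValueError(f"unknown status: {t.status}")
        totals[t.status] += t.time_hours
    return dict(totals)

# Illustrative (hypothetical) decomposition of a QA domain.
tasks = [
    Task("QA", "automated", 2.0, 1, 1, "analyst"),
    Task("QA", "human_led", 4.0, 3, 3, "senior methodologist"),
    Task("QA", "new_ai_task", 1.5, 3, 3, "senior methodologist"),
]
print(effort_by_status(tasks))
```

Stratifying by use case (exploratory, internal decision support, HTA-facing) would simply add a further grouping key to the aggregation.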
RESULTS: AI assistance reduced time spent on procedural execution tasks, such as data extraction and initial drafting, by approximately 40-60%. However, this reduction was accompanied by a redistribution of effort toward higher-cognition activities. New categories of human effort emerged, including prompt formulation and iteration (approximately 15% of total workflow time) and increased validation and contradiction resolution during QA, associated with a substantial increase in cognitive load. Additional effort was required to manage audit trails and traceability for HTA-facing outputs. Overall, expertise requirements shifted from content production toward content adjudication, with senior methodological judgement increasingly becoming the primary constraint in AI-assisted workflows.
CONCLUSIONS: AI-assisted HEOR workflows primarily reallocate, rather than eliminate, human effort. Efficiency gains are most reliable when AI supports the generation of candidate outputs, while humans retain responsibility for interpretation, methodological judgement, validation, and accountability.
Conference/Value in Health Info
2026-05, ISPOR 2026, Philadelphia, PA, USA
Value in Health, Volume 29, Issue S6
Code
MSR44
Topic
Methodological & Statistical Research
Topic Subcategory
Artificial Intelligence, Machine Learning, Predictive Analytics
Disease
No Additional Disease & Conditions/Specialized Treatment Areas