Artificial Intelligence in Health Economics and Outcomes Research: Highlighting the Contributions of Early Career Researchers

Abstract

For 60 years, the PhRMA Foundation has supported promising early career researchers through its grants and fellowships. Through our Challenge Award competitions with journals such as Value in Health, we strive to encourage and amplify the voices of early career investigators in important areas of research.

The PhRMA Foundation’s expert review committee selected 4 articles in this themed section for a $5000 Trainee Challenge Award. These trainee first authors are future leaders in health economics and outcomes research (HEOR), and the Foundation is pleased to recognize their expertise in tackling such a pressing and challenging topic. Below is a brief summary of the 4 award-winning articles:


Unravelling Public Preferences for the Use of Artificial Intelligence Mobile Health Applications in Australia.
Vinh Vo, MEPP, Maame E. Woode, PhD, Stacy M. Carter, PhD, Chris Degeling, PhD, Gang Chen, PhD

Vo et al present an excellent example of how researchers can and should strive to understand patient perspectives on the use of artificial intelligence (AI) in healthcare. The authors conducted a discrete choice experiment to assess the Australian public's preferences regarding AI-based mobile health applications for heart disease and mental health. They found that respondents' top concerns were the accuracy of AI results and the integration of AI with human doctor expertise. This study is an important springboard for understanding consumer acceptance of AI technologies in healthcare.


Roles of AI-Based Synthetic Data in Health Economics and Outcomes Research.
Tim C. Lai, BS, and Surachat Ngorsuraches, PhD

The article by Lai and Ngorsuraches serves as a useful primer on synthetic data and its potential applications in HEOR. The authors describe several data-related challenges in HEOR that synthetic data could help solve, such as insufficient data for underrepresented populations and rare diseases. They also lay out the gaps in knowledge and policy that must be addressed to advance the use of synthetic data in HEOR. The authors recommend developing an evaluation framework to facilitate the adoption of synthetic data in HEOR.


Use of Large Language Models to Extract Cost-Effectiveness Analysis Data: A Case Study.
Xujun Gu, MSPH, Hanwen Zhang, MS, Divya Patil, MS, Zafar Zafari, PhD, Julia Slejko, PhD, Eberechukwu Onukwugha, PhD

In this case study, Gu et al address an important and practical question for researchers: Can large language models (LLMs) help streamline the collection of data from cost-effectiveness analyses (CEAs)? The authors evaluated the performance of a custom ChatGPT model in extracting specific data points from a selection of 34 articles, compared with the Tufts CEA Registry and researcher-validated data. This study showed that although LLMs may offer accuracy comparable to that of established registries, human supervision and expertise are essential for effective use. The study also underscores the importance of transparent and thoughtful prompt engineering for this type of research.


Role of Generative Artificial Intelligence in Assisting Systematic Review Process in Health Research: A Systematic Review.
Muhammed Rashid, PhD, Cheng Su Yi, MHS, Thipsukhon Sathapanasiri, MD, Sariya Udayachalerm, PhD, Kansak Boonpattharatthiti, PharmD, Suppachai Insuk, PharmD, Sajesh K. Veettil, PhD, Nai Ming Lai, MBBS, Nathorn Chaiyakunapruk, PhD, Teerapon Dhippayom, PhD, for the Generative Artificial Intelligence for Navigating Systematic Reviews (GAINSR) Working Group

The systematic review by Rashid et al complements the previous article on LLMs because it also explores the practicality of using AI-based tools to conduct studies more efficiently. This article summarizes the evidence from 30 studies examining the use of generative AI in the systematic review process. The authors conclude that although generative AI tools show promise in supporting certain systematic review tasks, such as data extraction, their performance on other tasks, such as literature searching, is inconsistent. Further development and validation are needed before these tools can be reliably integrated into the various stages of a systematic review.


As HEOR professionals face increasing pressure to incorporate AI tools into their work, it is critical that they understand the benefits and challenges of using them. The collective insight from these articles is that although AI tools hold great promise for HEOR, researchers should proceed with caution, taking a thoughtful and transparent approach to their use. Combining these new technologies with adequate human oversight can help ensure researchers are balancing innovation with rigor and patient preferences.

Article and Author Information

Authorship Confirmation: All authors certify that they meet the ICMJE criteria for authorship.
Funding/Support: The authors received no financial support for this research.

Authors

Amy M. Miller, Emily Ortman
