Reflections on the Past, Present, and Future of AI in HEOR and Healthcare Decision Making
Rob Abbott, CEO & Executive Director, ISPOR
In 1955, when John McCarthy had the idea to organize a summer workshop at Dartmouth College to develop ideas about “thinking machines”, I wonder if he could have imagined what the future might hold. McCarthy—alongside his workshop co-organizers, Claude Shannon, Nathaniel Rochester, and Marvin Minsky—is today considered to be one of the founding fathers of artificial intelligence (AI).
The history of AI in healthcare can be traced from those formative discussions at Dartmouth College in the 1950s to today's sophisticated machine learning and large language model applications, which are rapidly changing the way we think about and undertake diagnostics in medical imaging, powering robotic surgeries, accelerating drug discovery, and streamlining hospital and clinic administration.
Key milestones in this journey over the past 70 years include:
- The development of MYCIN in the early 1970s. MYCIN was an expert system to diagnose bacterial infections and recommend appropriate antibiotics based on a set of 600 “if-then” rules.
- The application of neural networks and pattern recognition to medical images in the late 1980s and 1990s, and the approval by the US Food and Drug Administration (FDA) of the first commercial computer-aided detection system for mammography in 1998.
- The debut of IBM Watson in 2011. IBM's question-answering system, Watson, gained wide prominence by winning the TV quiz show Jeopardy!, which raised public awareness of AI and demonstrated the system's ability to handle complex natural language queries. Soon after, Watson was applied to healthcare, processing unstructured clinical data and identifying new research insights.
- The 2017 FDA approval of CardioAI for analyzing cardiac MRI images, a significant milestone for the clinical integration of AI.
In light of the above, it seems fitting that Value and Outcomes Spotlight should undertake a deep dive into the shape of current thought on AI in health economics and outcomes research (HEOR) and healthcare decision making more broadly. We know, for instance, that natural language processing has advanced significantly in recent years, to the point where it is possible to extract information from physician notes and bring the idea of virtual health assistants, such as Pharmbot, to life. We also know that large language models like ChatGPT have the potential to transform personalized medicine, drug discovery, and clinical decision support through their ability to analyze vast, diverse datasets.
As the papers gathered here make clear, AI has become a dominant theme in HEOR and healthcare. Three examples underscore just how pervasive AI's influence has become. In next-generation sequencing, AI is used to analyze large datasets, which helps accelerate sequencing, reduce errors, identify genetic variations, and enable personalized medicine. AI-powered tools can also improve disease diagnosis, predict treatment responses, and identify potential drug targets by integrating genomic data with other health information. In evidence synthesis, AI automates various stages of the process, such as literature searching, article screening, and data extraction, which increases efficiency and speed. AI tools such as machine learning and natural language processing can identify relevant studies, extract data, and help with tasks including trial design and synthesizing information for summaries. Importantly, the role of AI in this context is one of enhancing efficiency; human oversight is still needed—indeed, it is critical—for validation and to avoid errors. In medical imaging, AI can enhance the analysis of medical images such as X-rays, MRIs, and CT scans by identifying patterns that human radiologists might miss. These insights help clinicians diagnose diseases earlier and more accurately.
As the use of AI in HEOR and healthcare decision making increases, we will inevitably need to address ethical and legal considerations. While this is a domain that is changing quickly, it is fair to say that the ethical frameworks for AI in HEOR are generally expansions of the four core principles of biomedical ethics: beneficence (doing good), non-maleficence (doing no harm), autonomy (respecting individuals' decisions), and justice (fairness and equity). On the legal side of the equation, the emerging frameworks reflect a desire to adapt existing law to AI-specific applications and, equally, to develop new AI-specific guidelines and regulations. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) sets standards for protecting patient health information. Compliance requires data encryption, access controls, audit trails, and Business Associate Agreements with third-party vendors. In the European Union (EU), the General Data Protection Regulation (GDPR) mandates lawful, transparent data processing for specific purposes and grants individuals rights such as access, correction, deletion, and the right not to be subject to decisions based solely on automated processing. The FDA has begun developing guidance for AI- and machine learning-based Software as a Medical Device, focusing on safety, effectiveness, and a total product lifecycle approach. The EU AI Act establishes a uniform legal framework governing the development and use of AI systems within the EU, taking a risk-based approach. I am proud to report that ISPOR has developed guidance and recommendations emphasizing transparency, accountability, and best practices for using AI in HEOR.
Alongside the excitement and rapidly growing profile of AI—and with it, a good deal of hype—it is important to emphasize that the best outcomes will be achieved through a combination of artificial and human intelligence. For instance, AI is not going to fully automate systematic reviews and evidence synthesis. It can, and likely will, accelerate evidence synthesis, but full automation is not acceptable to regulators or health technology assessment (HTA) bodies. Tools like DistillerAI help with screening and data extraction but require human oversight. In the same vein, AI is not going to replace traditional economic models. Health economists still own the core model logic, and while AI can simulate scenarios, detect model inconsistencies, speed up parameter searches, and generate model code (Markov, partitioned survival, and microsimulation models), human oversight is vital to justify causal assumptions, interpret HTA-specific requirements, ensure structural validity, and meet transparency and auditability requirements.
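To make concrete what "model code" means here, the following is a minimal, purely illustrative sketch (in Python) of a 3-state Markov cohort model, the simplest of the model types mentioned above. Every input (the states, transition probabilities, costs, utilities, and discount rate) is a hypothetical placeholder rather than a real parameter; whether such code is drafted by an analyst or by an AI assistant, it is the health economist who must justify the structure and inputs and validate the outputs against HTA requirements.

```python
import numpy as np

# Illustrative 3-state Markov cohort model (Healthy, Sick, Dead).
# All numbers below are hypothetical placeholders, not real parameters.

states = ["Healthy", "Sick", "Dead"]

# Annual transition probabilities; each row must sum to 1.
P = np.array([
    [0.85, 0.10, 0.05],   # from Healthy
    [0.00, 0.70, 0.30],   # from Sick
    [0.00, 0.00, 1.00],   # from Dead (absorbing state)
])

cost_per_cycle = np.array([500.0, 5000.0, 0.0])    # placeholder annual cost per state
utility_per_cycle = np.array([0.90, 0.60, 0.0])    # placeholder QALY weights per state
discount_rate = 0.035                              # placeholder annual discount rate
n_cycles = 20

cohort = np.array([1.0, 0.0, 0.0])  # entire cohort starts in Healthy
total_cost, total_qalys = 0.0, 0.0

for t in range(1, n_cycles + 1):
    cohort = cohort @ P                        # advance the cohort one cycle
    disc = 1.0 / (1.0 + discount_rate) ** t    # discount factor for this cycle
    total_cost += disc * float(cohort @ cost_per_cycle)
    total_qalys += disc * float(cohort @ utility_per_cycle)

print(f"Discounted cost:  {total_cost:,.0f}")
print(f"Discounted QALYs: {total_qalys:.2f}")
```

Even in this toy example, the choices that matter most, such as which health states exist and how the transition probabilities are derived and justified, sit outside the code itself, which is precisely where human oversight remains irreplaceable.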
This is an exciting time to be in HEOR; the accelerated use of AI represents a new frontier that we need to step onto. As your CEO, I pledge that ISPOR will do so courageously and responsibly.
