AI Agents and Guardrails in HEOR: The Ultimate Solution to GenAI Shortcomings or Just Another Overhyped Tool?

Moderator

Foluso O Agboola, MPH, MD, Institute for Clinical and Economic Review (ICER), Boston, MA, United States

Speakers

Sven L Klijn, MSc, Bristol Myers Squibb, Princeton, NJ, United States; Tim Disher, BSc, RN, PhD, Sandpiper Analytics, West Porters Lake, NS, Canada; Ghayath Janoudi, MSc, PhD, Loon, Cantley, QC, Canada

ISSUE: As AI agents (autonomous systems leveraging large language models (LLMs) and related algorithms) become more prevalent in HEOR, a critical debate emerges: Should these agents be widely integrated into decision-making processes, given ongoing concerns about transparency, ethical implications, and accountability? Are robust guardrails (structured policies, mechanisms, and technical controls that ensure outputs are safe, accurate, and ethical) sufficient to mitigate these risks? This session will debate whether the promise of AI agents outweighs their potential hazards.

OVERVIEW: Dr. Foluso Agboola will open the session with a short history of the evolution of AI implementation in HEOR (5 minutes). The issue panel will then feature three distinct perspectives:

Mr. Sven Klijn (12 minutes) will define AI agents and highlight their capabilities, illustrating how automation supports scalability, consistency, and performance uplift. This perspective will argue that integrating AI agents can advance HEOR, optimize resources, and enable informed decisions.

Dr. Tim Disher (12 minutes) will focus on the risks and negative externalities, including lack of transparency, accountability gaps, ethical concerns, and unintended consequences. This perspective will challenge assumptions about unfettered AI adoption, urging caution and oversight.

Dr. Ghayath Janoudi (12 minutes) will discuss the concept of guardrails, defining them as structured safeguards that ensure responsible deployment. Drawing on HTA guidance, best practices, and accountability measures, this perspective will assert that AI agents can be harnessed responsibly without stifling innovation.

The presentations will be followed by a 15-minute audience Q&A and debate period, offering attendees actionable insights for navigating the balance between AI-driven potential and the imperative for integrity, transparency, and trust.

Code

070

Topic

Health Technology Assessment, Methodological & Statistical Research, Study Approaches
