SECURE DEPLOYMENT OF LARGE LANGUAGE MODELS IN HEOR: GOVERNANCE, INFRASTRUCTURE, AND RISK MITIGATION BEST PRACTICES
Author(s)
Barinder Singh, RPh1, Haseeb Raza, MCA1, Inderpreet S. Marwaha, MSc, RPh1, Shubhram Pandey, MSc1, Ritesh Dubey, PharmD2, Rajdeep Kaur, PhD1;
1Pharmacoevidence Pvt. Ltd., Mohali, India, 2Pharmacoevidence, Mohali, India
OBJECTIVES: Large Language Models (LLMs) are increasingly being explored across Health Economics and Outcomes Research (HEOR) use cases. However, reliance on consumer-grade AI tools poses significant risks related to data privacy, regulatory compliance, intellectual property protection, and emerging AI-specific security vulnerabilities. This paper proposes a regulatory-aligned framework establishing secure and compliant AI deployment principles for responsible use within HEOR workflows.
METHODS: A conceptual framework was developed based on a targeted review of healthcare data protection regulations, enterprise security standards, and emerging AI governance and risk management practices. Potential security risks were assessed across the HEOR AI lifecycle, including data access, model interaction, and output management. Mitigation strategies were consolidated into core principles for safe deployment.
RESULTS: The five pillars of the framework are: (i) Enterprise-ready infrastructure: Prioritizes LLM deployment on private, high-grade infrastructure, explicitly advising against public LLM interfaces that lack data privacy assurances or immunity from model retraining; (ii) Authentication Control: Enforces strict permission levels (role-based access) via robust multi-factor authentication mechanisms enabling project-specific access to confidential data; (iii) Regulatory Compliance: Mandates compliance with AI and data policies, including but not limited to HIPAA and GDPR, end-to-end data encryption, and permanent audit logs; (iv) Automated Governance Agents: Requires the integration of real-time guardrails and supervisory agents to enforce usage policies and pre-emptively block unsafe or non-compliant outputs; and (v) AI-specific Risk Mitigation: Enforces safeguards against emerging threats such as prompt injection and unintended model behaviours. Collectively, these measures mitigate critical risks related to data sovereignty and regulatory compliance.
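To make pillars (ii) and (iv) concrete, the following is a minimal illustrative sketch, not part of the abstract's framework: all names (PHI_PATTERNS, ROLE_PERMISSIONS, authorize, guardrail_check) are hypothetical, and a production deployment would rely on dedicated PHI-detection services and a policy engine rather than hand-rolled regexes.

```python
import re

# Hypothetical PHI-like patterns a supervisory guardrail might screen for.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Hypothetical role-to-project mapping for project-specific access (pillar ii).
ROLE_PERMISSIONS = {
    "analyst": {"project_a"},
    "reviewer": {"project_a", "project_b"},
}

def authorize(role: str, project: str) -> bool:
    """Role-based, project-specific access check (pillar ii)."""
    return project in ROLE_PERMISSIONS.get(role, set())

def guardrail_check(output: str) -> tuple[bool, list[str]]:
    """Pre-emptively flag LLM output containing PHI-like patterns (pillar iv).

    Returns (is_safe, list_of_violated_pattern_names).
    """
    violations = [name for name, pat in PHI_PATTERNS.items()
                  if pat.search(output)]
    return (not violations, violations)

if __name__ == "__main__":
    ok, flags = guardrail_check("Patient contact: jane@example.com")
    print(ok, flags)  # a blocked output: the email pattern is flagged
```

In practice such checks would sit between the model and the user, with audit logging of every blocked output to support the permanent audit trails required under pillar (iii).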
CONCLUSIONS: Secure adoption of LLMs in HEOR requires a comprehensive deployment and governance strategy that extends beyond model performance considerations. Compliant infrastructure, strong access controls, and AI-specific safeguards can enable responsible innovation while maintaining data security and regulatory alignment. This framework provides guidance for organizations seeking to operationalize LLMs within HEOR environments safely and at scale.
Conference/Value in Health Info
2026-05, ISPOR 2026, Philadelphia, PA, USA
Value in Health, Volume 29, Issue S6
Code
MSR130
Topic
Methodological & Statistical Research
Topic Subcategory
Artificial Intelligence, Machine Learning, Predictive Analytics
Disease
No Additional Disease & Conditions/Specialized Treatment Areas