Bridging the Gap Between AI Principles and Practice: A Comparative Evaluation of AI Position Statements and Frameworks for Responsible Use

Author(s)

Amelia Peddle, MSc, Shona Lang, PhD, Emily Hardy, MBiol.
Petauri Evidence, Bicester, United Kingdom.
OBJECTIVES: Given the conflicting benefits of and concerns about artificial intelligence (AI) use in scientific research, this study was conducted to compare relevant position statements and to assess the suitability of currently available AI frameworks for guiding the responsible use of AI in systematic literature reviews (SLRs).
METHODS: Targeted searches of health technology assessment (HTA) body websites (including NICE, SMC, CADTH, HAS, IQWiG, and FDA) and evidence synthesis guidelines (including PRISMA and Cochrane), supplemented by desktop research, were conducted to identify position statements and frameworks from key decision-making bodies. Details were extracted into Excel® to ensure consistent comparison of position statement content and thorough collation of relevant evidence.
RESULTS: HTA and guideline body AI position statements reported consistent themes, differing only in level of detail, which ranged from comprehensive granular guidance to broad, top-level scopes. Themes included methodological transparency, traceability of AI-assisted processes, human oversight, and accessibility. Most notably, NICE, PRISMA-AI, and Cochrane provide unambiguous guidance centered on the integration, reporting, and validation of AI, whereas FDA and IQWiG stress regulatory compliance and the importance of risk mitigation. Several published and emerging frameworks were identified for the critical appraisal of AI in SLRs, including ELEVATE (supports structured reporting and traceability), RAISE (focuses on reproducibility and risk assessment), and TRIPOD (provides principles for transparent model reporting). Whilst these frameworks collectively address many critical concerns around AI in SLRs, several gaps remain, including standards for continuous real-time validation of AI systems, inconsistencies in acceptable levels of human-AI interaction, and limited tools for assessing the ethical and practical risks of AI in study selection and screening.
CONCLUSIONS: To address the notable framework gaps and the emerging challenges of a dynamic AI landscape, continued collaboration among HTA bodies, researchers, and regulators is needed; this is essential for the ethical and effective integration of AI in evidence synthesis.

Conference/Value in Health Info

2025-11, ISPOR Europe 2025, Glasgow, Scotland

Value in Health, Volume 28, Issue S2

Code

OP1

Topic

Methodological & Statistical Research, Organizational Practices, Study Approaches

Topic Subcategory

Best Research Practices

Disease

No Additional Disease & Conditions/Specialized Treatment Areas
