Artificial Intelligence Bias in Systematic Literature Reviews (SLRs) for Health Technology Assessment (HTA)
Author(s)
Mangat G1, Sharma S1, Bergemann R2
1Parexel International, Mohali, India, 2Parexel International, Basel, Switzerland
OBJECTIVES: SLRs are an essential tool for evaluating the effectiveness of health technologies and informing policy decisions. In the last few years, there has been an influx of artificial intelligence (AI)-based tools for conducting literature reviews. However, bias can occur at various stages of this process. We aim to explore the issue of AI bias in SLRs for HTA.
METHODS: We conducted in-house research to explore the potential AI-related bias in SLRs.
RESULTS: Typical sources of bias in SLRs include study selection for inclusion/exclusion, and the interpretation and synthesis of the evidence. AI tools may exacerbate these biases if their algorithms are trained on biased data or are not transparent in their decision-making. A key limitation of AI tools is their black-box nature: it is often unclear exactly how or why a decision has been made. In study selection, this can lead to inaccurate inclusion/exclusion, i.e., over-inclusion or missed relevant studies. AI tools can also introduce bias into the synthesis process if they are not programmed to consider all relevant factors, and into the reporting process if they do not represent the results accurately, including all study and patient characteristics. To address these issues, it is important to use rigorous and transparent methods for selecting, synthesizing, and reporting the evidence in SLRs for HTA, e.g., rule-based programming. This may include using standardized tools to assess the quality of the evidence and statistical techniques to synthesize the results in a systematic and unbiased manner. A diverse group of reviewers is also essential to ensure that different perspectives are considered.
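To illustrate the rule-based programming mentioned above, the sketch below contrasts a transparent screening step with a black-box classifier: every inclusion/exclusion decision is traceable to an explicit, auditable rule. This is a minimal illustrative assumption on our part, not a tool described in the abstract; the rule names and keyword criteria are hypothetical placeholders for real PICO-style eligibility criteria.

```python
# Hypothetical sketch of rule-based (transparent) study screening for an SLR.
# Unlike a black-box model, each decision carries a human-readable reason,
# so reviewers can audit why a record was included or excluded.
# All terms below are illustrative assumptions, not actual SLR criteria.

from dataclasses import dataclass, field


@dataclass
class Record:
    title: str
    abstract: str


@dataclass
class Decision:
    include: bool
    reasons: list = field(default_factory=list)


# Illustrative keyword rules (assumed examples of eligibility criteria)
POPULATION_TERMS = {"adult", "patient"}
EXCLUSION_TERMS = {"animal", "in vitro"}


def screen(record: Record) -> Decision:
    """Apply explicit rules in a fixed order, logging the reason for each decision."""
    text = f"{record.title} {record.abstract}".lower()
    reasons = []
    # Exclusion rules are checked first and are always reported by name.
    for term in sorted(EXCLUSION_TERMS):
        if term in text:
            reasons.append(f"excluded: matched exclusion term '{term}'")
            return Decision(False, reasons)
    # Inclusion requires at least one population term.
    if any(term in text for term in POPULATION_TERMS):
        reasons.append("included: population term matched")
        return Decision(True, reasons)
    reasons.append("excluded: no population term matched")
    return Decision(False, reasons)
```

Because the decision trail is explicit, a reviewer can inspect `Decision.reasons` for any record, which directly addresses the over-inclusion and missed-study risks the abstract attributes to opaque AI tools.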
CONCLUSIONS: By addressing AI bias in SLRs for HTA, the reliability and accuracy of reviews conducted with the help of AI can be improved to ensure that they inform evidence-based policy decisions.
Conference/Value in Health Info
Value in Health, Volume 26, Issue 6, S2 (June 2023)
Code
HTA101
Topic
Methodological & Statistical Research, Organizational Practices, Study Approaches
Topic Subcategory
Academic & Educational, Artificial Intelligence, Machine Learning, Predictive Analytics, Best Research Practices, Literature Review & Synthesis
Disease
No Additional Disease & Conditions/Specialized Treatment Areas