Can Artificial Intelligence (AI) be Used to Improve the Efficiency of Title and Abstract Citation Screening?

Author(s)

Eichinger C1, Evuarherhe O1, Law L1, Radwan M1, Liew A1, Holman N1, Sellick C1, Cadwell T2, Hugo M2, Wager K3
1Oxford PharmaGenesis, Oxford, UK, 2Vyasa, Boston, MA, USA, 3Oxford PharmaGenesis, Tubney, UK


OBJECTIVES: Literature searches for systematic reviews (SRs) often return a large number of studies for reviewers to screen. We developed an AI-assisted, human-in-the-loop screening tool leveraging named-entity recognition, a natural language processing method. The tool identifies and highlights biomedical concepts and prespecified keywords to filter text and to guide reviewers toward relevant passages in the title and abstract. We hypothesized that highlighted keywords and concepts may reduce the time taken to decide which references to include or exclude. We aimed to compare the time taken and the accuracy of AI-assisted screening using the tool with those of unassisted screening in Excel.
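The highlighting step can be illustrated with simple keyword matching; this is a minimal sketch, not the tool's actual named-entity recognition pipeline, and the keywords and abstract text are invented for illustration:

```python
import re

def highlight(text: str, keywords: list[str]) -> str:
    """Wrap each prespecified keyword in **...** so reviewers can spot it at a glance."""
    pattern = re.compile("|".join(re.escape(k) for k in keywords), re.IGNORECASE)
    return pattern.sub(lambda m: f"**{m.group(0)}**", text)

abstract = "A randomised controlled trial of adalimumab in rheumatoid arthritis."
print(highlight(abstract, ["randomised controlled trial", "rheumatoid arthritis"]))
# → A **randomised controlled trial** of adalimumab in **rheumatoid arthritis**.
```

In the tool itself, a trained named-entity recognition model replaces the fixed keyword list, so biomedical concepts are recognized even when their surface forms vary; the reviewer still makes every include/exclude decision.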

METHODS: We used 500 abstracts from a previously completed SR, which were screened by seven reviewers (250 in Excel and 250 with the tool). To mitigate interpersonal variability and increases in screening speed over time, abstracts were assigned at random, and the order of assisted and unassisted screening was varied across reviewers. Full control of decision-making remained with the reviewers. Screening was timed, and the accuracy of include/exclude decisions was scored against a double-blind screened data set from the previous SR. A paired t-test and a Mann–Whitney U test were performed to compare the differences in time and accuracy between methods.
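The statistical comparison can be sketched as follows; the per-reviewer values below are synthetic placeholders (only summary figures were reported), so the printed p-values are illustrative, not the study's results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-reviewer screening times (minutes) for the same seven reviewers
# under both conditions, giving a paired design.
excel_times = rng.normal(176, 40, size=7)
tool_times = excel_times - rng.normal(40, 15, size=7)

# Paired t-test: each reviewer screened with both methods.
t_stat, p_time = stats.ttest_rel(excel_times, tool_times)

# Mann–Whitney U test on accuracy scores (proportion of decisions matching
# the double-blind reference set).
excel_acc = rng.normal(0.88, 0.03, size=7)
tool_acc = rng.normal(0.88, 0.03, size=7)
u_stat, p_acc = stats.mannwhitneyu(excel_acc, tool_acc)

print(f"time: p = {p_time:.3f}; accuracy: p = {p_acc:.3f}")
```

The paired test suits the time comparison because the same reviewers screened under both conditions; the non-parametric Mann–Whitney U test makes no normality assumption about the accuracy scores.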

RESULTS: The average (interquartile range) time taken to screen 250 abstracts in Excel was 176.4 (90.5–194.5) minutes compared with 136.3 (77.0–154.5) minutes using the tool; p = 0.02. Accuracy was 88% for both approaches; p = 0.75. Screeners provided positive feedback on the text-highlighting function, stating that it made screening easier than in Excel.

CONCLUSIONS: Citation screening was 23% faster with the tool than with Excel; accuracy was similar for both approaches, and decisions were fully controlled by the reviewers in both cases. These findings support further development of the tool to reduce the manual screening burden in SRs.

Conference/Value in Health Info

2022-11, ISPOR Europe 2022, Vienna, Austria

Value in Health, Volume 25, Issue 12S (December 2022)

Code

MSR107

Topic

Methodological & Statistical Research, Study Approaches

Topic Subcategory

Artificial Intelligence, Machine Learning, Predictive Analytics, Literature Review & Synthesis

Disease

No Additional Disease & Conditions/Specialized Treatment Areas
