Automated Tools to Support Screening in Literature Reviews: What's Out There for Reviewers?

Author(s)

Edwards M1, Watkins D2, Chappell M2, Graziadio S2
1York Health Economics Consortium, York, UK, 2York Health Economics Consortium, York, UK

OBJECTIVES: With the increasing volume of health-related research evidence available via databases and the internet, the resources required to undertake a systematic or pragmatic review are also increasing. We conducted a pragmatic literature review that highlighted a lack of comparative evaluations of machine learning tools to assist with record screening. Our aim was to select the most promising tools prior to conducting our own comparative study.

METHODS: We conducted a scoping exercise to identify and briefly assess freely available tools that perform screening or prioritisation of records for systematic or pragmatic reviews. We searched PubMed and the Systematic Review Toolbox to identify potentially relevant tools. We excluded tools that did not use machine learning, had no graphical user interface, were not freely available for trial, or were not compatible with more than one database.

Tools were tested for functionality and ease of use with a set of 900 records (drawn from major databases including Embase and MEDLINE) exported from an existing EndNote library created as part of a previously published systematic review.

RESULTS: Of the eighteen potentially relevant tools identified in the scoping exercise, nine fulfilled the predefined inclusion criteria. On closer inspection, four were unsuitable for our needs or unavailable for testing (DoCTER, Research Screener, RobotAnalyst, PICO Portal), and one (EPPI-Reviewer) had complex setup requirements. The four tools we did test (Abstrackr, ASReview LAB, Rayyan, SWIFT-ActiveScreener) proved functional and straightforward to use: each imported the RIS test file without difficulty and produced an exportable relevance ranking based on initial and ongoing screening by a human reviewer.

CONCLUSIONS: These four tools will be subject to further testing to establish their relative performance in different types of review, determine their role within the review workflow, and assess whether the potential time savings are worth a possible associated loss of performance.

Conference/Value in Health Info

2022-11, ISPOR Europe 2022, Vienna, Austria

Value in Health, Volume 25, Issue 12S (December 2022)

Code

MSR91

Topic

Methodological & Statistical Research

Topic Subcategory

Artificial Intelligence, Machine Learning, Predictive Analytics

Disease

No Additional Disease & Conditions/Specialized Treatment Areas
