Model replication is important because it enables researchers to verify research integrity and transparency and, potentially, to inform the conceptualization of new or updated models.
The aim of this study was to evaluate the replicability of published decision analytic models and to identify the barriers and facilitators to replication.
Replication attempts were made for 5 published economic modeling studies, using only the publicly available information within the manuscripts and supplementary materials. The replicator attempted to reproduce the key results detailed in each paper, for example, the total cost, total outcomes, and, if applicable, the reported incremental cost-effectiveness ratio. Although a replication attempt was not explicitly defined as a success or failure, the percentage difference between the replicated and original results was calculated.
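The percentage-difference comparison can be sketched as follows. This is an illustrative sketch only, not code from the study; the function name and the example values are hypothetical, and the formula assumes the common signed convention (replicated − original) / original × 100.

```python
def percentage_difference(original: float, replicated: float) -> float:
    """Signed percentage difference of a replicated result vs. the original.

    Positive values mean the replication overestimated the original result;
    negative values mean it underestimated it.
    """
    return (replicated - original) / original * 100.0

# Hypothetical example (values not taken from the case studies):
original_cost = 10000.0
replicated_cost = 9800.0
print(round(percentage_difference(original_cost, replicated_cost), 2))  # -2.0
```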
In conducting the replication attempts, common barriers and facilitators emerged. For most case studies, the replicator needed to make additional assumptions when recreating the model, a problem often exacerbated by conflicting information in the text and the tables. Across the case studies, the variation between original and replicated results ranged from −4.54% to 108.00% for costs and from −3.81% to 0.40% for outcomes.
This study demonstrates that although models may appear to be comprehensively reported, the reporting is often insufficient to enable precise replication. Further work is needed to understand how to improve model transparency and, in turn, increase the chances of successful replication, thus ensuring future usability.
Emma McManus, David Turner, Ewan Gray, Haseeb Khawar, Toochukwu Okoli, Tracey Sach