R Packages for Matching-Adjusted Indirect Comparison: Differences in Usability and Identical Results From Reuse of Prior Code

Abstract

Objectives

To evaluate the consistency of results and usability of R packages implementing matching-adjusted indirect comparisons (MAICs).

Methods

A data set from a previously published simulation study was used to perform unanchored MAICs. Four R packages (“maic,” “MAIC,” “maicplus,” and “maicChecks”) were identified and used to calculate weights for 8 scenarios based on different types of summary statistics (including means, medians, and proportions). Comparisons were made in terms of effective sample sizes (ESSs), weighted outcomes, and usability. An exploratory analysis tested the impact of changing the optimization algorithm within the underlying source code.
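To illustrate the shared weight estimation step, the following is a minimal sketch of the TSD 18 method-of-moments approach that the packages reuse; object names such as ipd, agg_means, and the covariate set are hypothetical and are not taken from any of the packages.

    # Minimal sketch of TSD 18-style weight estimation (hypothetical objects:
    # ipd is an individual patient data frame, agg_means holds the aggregate targets)
    X <- sweep(as.matrix(ipd[, c("age", "prop_male")]), 2,
               c(agg_means$age, agg_means$prop_male))    # center IPD covariates on targets
    objfn  <- function(alpha, X) sum(exp(X %*% alpha))   # method-of-moments objective
    gradfn <- function(alpha, X) colSums(X * drop(exp(X %*% alpha)))
    opt <- optim(par = rep(0, ncol(X)), fn = objfn, gr = gradfn,
                 X = X, method = "BFGS")                 # gradient-based optimizer
    w   <- drop(exp(X %*% opt$par))                      # estimated weights
    ess <- sum(w)^2 / sum(w^2)                           # effective sample size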

Results

All packages reused the National Institute for Health and Care Excellence (NICE) Technical Support Document (TSD) 18 code and therefore produced identical results for weights, ESSs, and outcomes. The “maicChecks” package also offered an alternative weight calculation method, leading to higher ESSs and slightly different outcomes. Usability varied, with only the “maic” package natively supporting medians in the aggregate-level data. Exploratory analysis revealed that modifying the optimization algorithm in the TSD 18 code could identify alternative sets of weights, further highlighting the potential for variability in results.
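The exploratory change can be sketched by reusing the hypothetical objects from the block above and varying only the optimizer passed to optim; this is an illustration of the kind of modification tested, not the authors' exact code.

    # Swapping the optimizer is a one-argument change; a different converged alpha
    # propagates to the weights, the ESS, and the weighted outcome estimates.
    opt_nm <- optim(par = rep(0, ncol(X)), fn = objfn, X = X,
                    method = "Nelder-Mead")
    w_nm   <- drop(exp(X %*% opt_nm$par))
    c(ess_original = sum(w)^2 / sum(w^2),
      ess_nm       = sum(w_nm)^2 / sum(w_nm^2))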

Conclusions

Current R packages for MAIC rely largely on a single implementation of weight calculation, which has important implications for health technology assessments. Differences in weight calculation methods and optimization routines can lead to differences in point estimates and uncertainty bounds. Comparative studies of both existing and novel approaches are needed.

Authors

Kurt Taylor, Anthony J. Hatswell
