Adjusted indirect comparisons (anchored via a common comparator) are an integral part of health technology assessment. These methods are challenged by between-study differences in inclusion/exclusion criteria, outcome definitions, and patient characteristics, as well as by the need to ensure an appropriate common comparator.
Matching-adjusted indirect comparison (MAIC) can address these challenges, but the appropriate application of MAICs remains uncertain. Open questions include whether to match the individual-level data to the aggregate-level study separately by treatment arm or across combined arms, which matching algorithm to use, and whether to include the control treatment outcome and/or covariates available in the individual-level data.
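To make the matching step concrete, the following is a minimal sketch of the method-of-moments weighting that underlies MAIC: individual-level covariates are centred at the published aggregate means, and weights of the form w_i = exp(x_i'a) are found by minimising a convex objective so that the weighted covariate means match the aggregate means. The patient numbers, covariate names, and target means below are illustrative assumptions, not data from any study discussed here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Hypothetical individual-level data (IPD): age and a biomarker for 500 patients.
ipd = rng.normal(loc=[55.0, 1.2], scale=[8.0, 0.4], size=(500, 2))

# Assumed published aggregate means from the comparator study.
agg_means = np.array([60.0, 1.0])

# Centre IPD covariates at the aggregate means. The method-of-moments
# weights are w_i = exp(x_i'a), obtained by minimising the convex
# objective sum_i exp(x_i'a); at the optimum the weighted covariate
# means equal the aggregate means.
x_centered = ipd - agg_means

def objective(a):
    return np.exp(x_centered @ a).sum()

def gradient(a):
    return x_centered.T @ np.exp(x_centered @ a)

res = minimize(objective, x0=np.zeros(2), jac=gradient, method="BFGS")
weights = np.exp(x_centered @ res.x)

# Check: after weighting, the IPD covariate means match the aggregate means.
weighted_means = (weights[:, None] * ipd).sum(axis=0) / weights.sum()
print(np.round(weighted_means, 3))
```

Matching by arm versus on combined arms corresponds to running this weighting once per treatment arm (with arm-specific targets) or once on the pooled sample; entropy balancing replaces the exponential-tilt objective with a constrained entropy criterion but yields weights of the same form for mean constraints.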
Results from seven matching approaches applied to a continuous outcome in six simulated scenarios demonstrated that, when no effect modifiers were present, the matching methods were equivalent to the unmatched Bucher approach. When effect modifiers were present, the matching methods (regardless of approach) outperformed the Bucher method. Matching on arms separately produced more precise estimates than matching on total moments, and in certain scenarios matching that included the control treatment outcome did not produce the expected effect size. Entropy balancing was also examined to determine whether it offered any notable advantage over the method proposed by Signorovitch et al. When unmeasured effect modifiers were present, no approach was able to estimate the true treatment effect.
Compared with the Bucher approach (no matching), the MAICs examined produced more accurate estimates, but further research is required to understand how these methods perform across a wider array of situations.