Drop the “M”: Minimally Important Difference and Change Are Not Independent Properties of an Instrument and Cannot Be Determined as a Single Value Using Statistical Methods
Abstract
Objectives
Patient-reported outcome (PRO) instruments typically give a score on a scale, making it difficult to know whether a given difference between an experimental treatment and control in a clinical trial is large enough to warrant use of that treatment. The minimally important difference (MID) is used for designing and interpreting clinical research. We aim to explore the rationale and statistical underpinnings of the idea that MID can be defined as an inherent property of a particular PRO instrument.
Methods
We undertook a narrative review of the empirical and methodologic literature on MIDs.
Results
Both methods of estimating MID—anchor- or distribution-based—are, at best, highly questionable. Anchor-based methods are problematic because patients may experience changes in health that are poorly captured by a general anchor question about whether health is better, worse, or about the same; distribution-based methods are conditioned on sample-dependent variability of an instrument, and there is no clear rationale as to why the relevance of a specific patient’s change in health can be meaningfully referenced to some prior sample’s score dispersion. Moreover, the degree of change we would require on a given scale is higher for a treatment that is costly, invasive, unpleasant, or associated with side effects than it is for a safe, well-tolerated, cheap, and convenient alternative or one that is associated with other benefits.
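Although the abstract presents no calculations, the contrast between the two approaches is easier to see in a worked sketch. The Python example below uses entirely hypothetical scores, anchor responses, and an assumed reliability; it is not taken from the article and simply illustrates the conventional half-SD, one-SEM, and anchor-group-mean calculations that the critique above targets.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical baseline and follow-up PRO scores (0-100 scale) for 200 patients.
baseline = rng.normal(60, 12, 200)
followup = baseline + rng.normal(3, 8, 200)
change = followup - baseline

# Hypothetical anchor responses: -1 = worse, 0 = about the same, +1 = a little better.
anchor = rng.choice([-1, 0, 1], size=200, p=[0.25, 0.4, 0.35])

# Distribution-based estimates: functions of this sample's score dispersion only.
sd_baseline = baseline.std(ddof=1)
mid_half_sd = 0.5 * sd_baseline                    # "half standard deviation" convention
reliability = 0.85                                 # assumed test-retest reliability
mid_sem = sd_baseline * np.sqrt(1 - reliability)   # one standard error of measurement

# Anchor-based estimate: mean change among patients reporting "a little better".
mid_anchor = change[anchor == 1].mean()

print(f"0.5 SD MID:  {mid_half_sd:.1f}")
print(f"1 SEM MID:   {mid_sem:.1f}")
print(f"Anchor MID:  {mid_anchor:.1f}")

Note that the first two values depend only on one sample’s score dispersion and an assumed reliability, while the third depends entirely on how patients map diverse experiences onto a single better/same/worse question; this is exactly the dependence the argument above objects to.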
Conclusions
MID must be estimated within a specific study context. It is best to think of PRO measures in terms of “ID” and leave the “M” to case-by-case, context-based interpretation.
Authors
Andrew Vickers, Kyle Nolla, David Cella