The (Harsh) Reality of Real-World Data External Comparators for Health Technology Assessment
Abstract
We read with great interest the recently published article by Patel et al1 and commend the authors for their tremendous efforts to tackle this challenging and important research question. We recognize and appreciate that the authors correctly note that the findings from their study “…are intended to provide a description of trends in use of EC data in support of HTA decisions, and not to provide a causal link between the use of RWD or EC data and the probability of a successful submission.” Despite this caveat, however, we believe that a number of the findings from Patel et al1 may be taken out of context and risk inviting causal interpretations about the impact that external comparators (ECs) may have on health technology assessment (HTA) recommendations.
Their finding that 59% of submissions (51 of 87) based on single-arm trials that used real-world data (RWD) ECs received positive decisions, a greater proportion than for submissions with trial-based EC data (49%) or no EC data (43%), was notable. Paired with their statement that they found “…that adding RWD ECs resulted in a higher acceptance rate than observed when prior trial ECs or no EC data at all were used…” and a discussion section titled “Impact of ECs on HTA Decision Making,” the article may suggest to the casual reader that, when faced with single-arm trials, ECs based on RWD represent an optimal and well-accepted solution for providing comparative evidence to HTA bodies.
Authors
Oliver Cox, Cormac Sammon, Alex Simpson, Radek Wasiak, Sreeram Ramagopalan, Kristian Thorlund