From the Journals

Real-World Effectiveness in Oncology: Plotting a Path Forward

Predicting Real-World Effectiveness of Cancer Therapies Using Overall Survival and Progression-Free Survival From Clinical Trials: Empirical Evidence for the ASCO Value Framework

Value Health. 2017;20(7):866-875
Darius N. Lakdawalla, Jason Shafrin, Ningqi Hou, et al.

Section Editors: Soraya Azmi, MBBS, MPH, Beigene, USA; Agnes Benedict, MSc, MA, Evidera, Budapest, Hungary

One particularly appealing hope for real-world evidence is its potential to reduce our reliance on clinical trials in assessing new and innovative treatments. This optimism about real-world data and evidence extends across all diseases and indications, yet much work remains to achieve this end. The paper by Lakdawalla et al helps us understand the path forward, particularly in the field of oncology.


"Despite our reliance on trials, it is well recognized that trials are a unique and highly selective environment, leaving a need to understand how the drug will perform among an unselected, or at least, less selected group of patients that physicians face in their day-to-day practice. Real-world evidence is attractive for this reason."

 

Clinical trials, the gold standard for assessing safety and efficacy, are designed around endpoints or outcomes that are established for the disease or indication in question. In oncology, these are overall survival, progression-free survival, and time to progression. Despite our reliance on trials, it is well recognized that trials are a unique and highly selective environment, leaving a need to understand how a drug will perform among the unselected, or at least less selected, patients that physicians face in their day-to-day practice. Real-world evidence is attractive for this reason. A second, more ambitious question is whether real-world endpoints can match up to gold-standard clinical trial endpoints and, if so, how reliably. This is the work the authors undertook.

The authors examined the relationship between efficacy measured in randomized clinical trials (overall survival, progression-free survival, and time to progression) and real-world overall survival. Real-world overall survival, reflected by real-world mortality hazard ratios, was measured against randomized clinical trial overall survival or against trial surrogate endpoints, namely progression-free survival and time to progression. In their methodology, the authors described selecting clinical trials in cancer indications of interest (breast, colorectal, lung, ovarian, and pancreatic cancers) that reported overall survival, progression-free survival, or time-to-progression endpoints that could be compared against survival of patients in the real world using the SEER-Medicare database.

Selected regimens had to have phase III pivotal trials reporting both overall survival and either progression-free survival or time to progression, and regimens had to be approved by the FDA before 2009 so that patients in the real world had at least 2 years of survival data captured in the SEER-Medicare database. Through this selection process, 29 pivotal trials met the study’s inclusion criteria. Next, the authors selected patients from the SEER-Medicare database who met the inclusion and exclusion criteria of each clinical trial, according to the relevant diagnosis and a real-world treatment regimen matching the corresponding trial regimen. Other inclusion criteria were also applied (eg, patients were required to have initiated cancer treatment within 90 days of diagnosis, patients could appear in the sample multiple times if they received more than 1 of the treatments of interest, and patients were assigned to the treatment or comparator arm depending on their tumor and the therapy received).


"This study showed that real-world overall survival endpoints can be usefully compared against clinical trial overall survival endpoints, but perhaps with surrogate endpoints there needs to be a “discount” factor built in."

The comparison between real-world endpoints and trial endpoints was carried out by assessing whether treatment efficacy derived from the randomized clinical trials was able to predict real-world overall survival. Cox proportional hazards regression analysis was used, with separate analyses performed to predict real-world overall survival using trial overall survival or trial surrogate endpoints. Sensitivity analyses were also performed to test the robustness of the results. For example, in the main model, patients in the baseline cohort were limited to those who met the randomized clinical trials’ inclusion and exclusion criteria; the authors also examined a “full cohort” comprising all patients receiving the relevant treatment in the SEER-Medicare database.
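The hazard ratio machinery underlying these comparisons can be made concrete with a small, self-contained sketch. The snippet below is illustrative only (simulated data, not the authors' SEER-Medicare analysis): it estimates a hazard ratio for a binary treatment indicator by maximizing the Cox partial likelihood with Newton's method.

```python
import math
import random

def cox_hr_binary(times, events, treated, iters=25):
    """Estimate the hazard ratio for a binary treatment indicator by
    maximizing the Cox partial likelihood with Newton's method.
    times: follow-up times; events: 1 = death observed, 0 = censored;
    treated: 1 = treatment arm, 0 = comparator arm."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    t = [times[i] for i in order]
    d = [events[i] for i in order]
    x = [treated[i] for i in order]
    beta = 0.0
    for _ in range(iters):
        grad, hess = 0.0, 0.0
        s0, s1 = 0.0, 0.0  # risk-set sums of exp(beta*x) and x*exp(beta*x)
        # Walk from the latest time backward so that, at each event,
        # (s0, s1) cover everyone still at risk (time >= event time).
        for i in range(len(t) - 1, -1, -1):
            w = math.exp(beta * x[i])
            s0 += w
            s1 += x[i] * w
            if d[i]:
                m = s1 / s0            # risk-set mean of the 0/1 covariate
                grad += x[i] - m       # score contribution
                hess += m * (1.0 - m)  # information (variance of 0/1 x)
        beta += grad / hess            # Newton step on the log hazard ratio
    return math.exp(beta)

# Simulated two-arm cohort with a true hazard ratio of 0.5 and
# administrative censoring at t = 2.0 (purely illustrative data).
random.seed(0)
treated = [i % 2 for i in range(400)]
raw = [-math.log(random.random()) / (0.5 if z else 1.0) for z in treated]
events = [1 if u < 2.0 else 0 for u in raw]
times = [min(u, 2.0) for u in raw]
hr = cox_hr_binary(times, events, treated)
print(f"estimated hazard ratio: {hr:.2f}")  # simulated truth is 0.5
```

The paper's actual analysis is richer (it regresses real-world survival on trial-reported hazard ratios across many regimens), but the same partial-likelihood estimator is the common currency in both settings.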

After applying the inclusion/exclusion criteria, there were 18,148 unique patients across 21 different randomized clinical trials divided among the 5 cancer indications of interest (8 trials were excluded because there were 10 or fewer patient observations). For example, in lung cancer, 12,146 patients met clinical trial inclusion/exclusion criteria. The results showed that real-world mortality hazard ratios did not differ from those of the randomized clinical trials, with a percentage difference of 0.6% (95% CI, -3.4% to 4.8%). On the other hand, real-world mortality hazard ratios were significantly different from the randomized clinical trial surrogate hazard ratios (SHRs), by about 16% (95% CI, 11% to 20.5%); that is, real-world survival was significantly lower than what would be predicted by the trials’ surrogate endpoints. The authors further looked at how their results compared against the ASCO Value Framework and found that there “was a large difference in only 4 out of 21 studies. In the others, the difference was either small or not statistically significant.”
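To make the reported percentage differences concrete, hazard ratios can be compared on the ratio scale. The formula and the numbers below are illustrative assumptions, not the paper's exact estimator (which pools trial-level comparisons via regression):

```python
def pct_difference(hr_real_world, hr_trial):
    """Percentage by which the real-world hazard ratio differs from the
    trial hazard ratio (illustrative definition, not the paper's exact
    estimator)."""
    return (hr_real_world / hr_trial - 1.0) * 100.0

# Hypothetical numbers: a trial surrogate hazard ratio of 0.70 against a
# real-world mortality hazard ratio of 0.81 yields a gap on the order of
# the ~16% reported for surrogate endpoints.
print(round(pct_difference(0.81, 0.70), 1))  # → 15.7
```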

This study showed that real-world overall survival endpoints can be usefully compared against clinical trial overall survival endpoints, but with surrogate endpoints a “discount” factor may need to be built in. The paper is a worthwhile read for anyone interested in how real-world evidence relates to, and can be measured against, the gold standard of clinical trial endpoints. Although the application in this study was in oncology, the same conceptual framework could be applied to any other disease area.
