
Unlocking the Promise of Real-World Evidence

Researchers today are faced with an ever-growing wealth of real-world data. While these data have long been used for safety surveillance, recent mandates in the 21st Century Cures Act and the Prescription Drug User Fee Act are accelerating the use of real-world evidence in regulatory decisions, including secondary indications for already approved drugs. Today, real-world-evidence–based insights are driving not only regulatory decisions, but also reimbursement decisions.

This year, the COVID-19 pandemic has further fueled real-world evidence analyses. Yet, whether these real-world-evidence–based decisions are reliable or clinically accurate remains unclear.

William H. Crown, PhD; Lucinda Orsini, DPM, MPH; Nirosha Mahendraratnam Lederer, PhD; Shirley Wang, PhD; and Diana Brixner, PhD, RPh, shared their thoughts on the use of real-world evidence for both regulatory and assessment purposes, and discussed current challenges, future opportunities, and whether real-world evidence is truly achieving its goal—helping to ensure greater patient access to more effective treatments.

Limits of Randomized Clinical Trials

William H. Crown, PhD, Distinguished Research Scientist at The Heller School for Social Policy and Management at Brandeis University in Waltham, Massachusetts, USA, began the discussion by revisiting some intrinsic limitations of randomized clinical trials. “The fact that something is randomized doesn’t necessarily give you the right answer if it isn’t a large enough trial and well designed.” For Crown, real-world evidence can augment and facilitate randomized clinical trials. “I think there are many cases where we’re doing randomized clinical trials now, when we could actually conduct quasi-experimental design-type studies with real-world data and achieve the same thing much more quickly and at lower cost.”

“Because drugs are frequently prescribed off-label, the data often exist in claims databases and electronic medical record data,” said Crown. “Companies can basically simulate the trial and these sorts of quasi-experimental design studies can generate very similar estimates to the randomized trials,” he explained. He highlighted cardiovascular disorders and diabetes as conditions where real-world evidence looks particularly promising.

 

Expanding Regulatory Support for Real-World Evidence

Incentivized by market changes and legislation like the 21st Century Cures Act, pharmaceutical companies are expanding their use of real-world evidence to test secondary indications for already approved drugs and to conduct ongoing safety surveillance.

Crown also emphasized the value of real-world evidence in single-arm trials, particularly in rare conditions when insufficient numbers of patients impede randomization to a comparator group. “There’s a lot of interest in so-called ‘external comparative trials’ using data drawn from databases to find a comparison group of persons similar to those being treated in one-arm trials.”

 

“My mantra is you have to use the right data to answer the right question. If you don’t understand what the data are telling you, you could get some misinformation and potentially some harmful decision making, as is being seen now with COVID-19.”
—Nirosha Mahendraratnam Lederer, PhD

 

Insight Into Underrepresented Groups

Crown also heralded these real-world evidence trials as an important tool for examining treatment effects in diverse patient populations (ie, groups often underrepresented in clinical trials). Regulatory trials often focus on narrowly defined subgroups to increase the precision of estimated treatment effects. But narrow inclusion and exclusion criteria limit generalizability. “In actual practice, these drugs are used in broader patient populations,” said Crown.

He emphasized the importance of target trials, which, by running the trial within a real-world database, allow for the analysis of treatment outcomes within specified subgroups. Not only can these trials be conducted quickly, but they can also examine treatment effects across different sociodemographic groups (ie, by race, ethnicity, gender, and geography), bringing critical insight into our understanding of treatment effects in these often underrepresented groups.
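To make the idea concrete, the sketch below runs a target-trial-style subgroup analysis on a synthetic claims-like cohort. It is illustrative only: the cohort, column names, and confounding structure are hypothetical assumptions, not any specific study design described by Crown.

```python
# Minimal sketch of a target-trial-style subgroup analysis on real-world data.
# The cohort, column names, and confounding structure are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Simulated claims-style cohort: treatment choice depends on age and
# comorbidity burden, mimicking the confounding a target trial must adjust for.
cohort = pd.DataFrame({
    "age": rng.integers(40, 85, n),
    "comorbidity": rng.integers(0, 5, n),
    "subgroup": rng.choice(["group_a", "group_b"], n),  # eg, a sociodemographic stratum
})
p_treat = 1 / (1 + np.exp(-(0.03 * cohort["age"] + 0.4 * cohort["comorbidity"] - 3)))
cohort["treated"] = rng.binomial(1, p_treat)
p_event = 1 / (1 + np.exp(-(-2 + 0.04 * cohort["age"] - 0.5 * cohort["treated"])))
cohort["event"] = rng.binomial(1, p_event)

# Estimate the adjusted treatment effect within each prespecified subgroup.
for name, grp in cohort.groupby("subgroup"):
    model = smf.logit("event ~ treated + age + comorbidity", data=grp).fit(disp=0)
    print(f"{name}: adjusted odds ratio = {np.exp(model.params['treated']):.2f}")
```

Because treatment is not randomized, the regression adjusts for the measured confounders; in practice, a target trial emulation would also specify eligibility, time zero, and follow-up windows before any analysis.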

 

Growing Acceptance by Regulators and Assessment Bodies

Lucinda Orsini, DPM, MPH, Associate Chief Science Officer at ISPOR in Lawrenceville, NJ, USA, reflected on the increasing acceptance of real-world evidence by both regulatory and assessment bodies. “I think regulators are always on the tip of the spear. They are the first ones to see that, with a rare disease or an area where there are very few treatments, companies are trying to bring these options forward as quickly as they can.”

Such cases, Orsini noted, have driven regulators to adopt a more flexible stance on real-world evidence. “Regulators are starting to see that more data are better than less data, even if the data aren’t what they would call ‘perfect, clinical trial, phase III data.’”

 

COVID-19 Spurring Real-World Evidence Acceptance

Under the COVID-19 pandemic, Orsini sees many assessment bodies becoming more receptive to real-world evidence. She used the United States as an example, where legislative bodies are requiring payers to cover COVID-19–related diagnostic testing, treatments, and healthcare services. In these cases, effectiveness information is limited, leaving payers to ask whether they can conduct assessments given the lack of clinical information.

Orsini sees an opportunity for industry and payers to work together to enact more reasonable usage agreements when faced with such limited product information. “I think COVID-19 brought outcomes-based contracting even more to the fore,” Orsini said. Payers can use their own data to analyze treatment outcomes among their members. “However, the manufacturers are going to want to look under the hood and see how that’s calculated,” she added.

 

Need for Greater Transparency

To ensure effective partnerships, Orsini stressed the need for greater transparency. “Everyone must understand where that data can be useful and where it’s not so useful. I think that’s the kind of transparency we need.” She added, “Unless we understand how data sets are pulled and put together, we just don’t really know what we’re getting into and why.”

But while transparency of methods and analysis is critical, she warned we must also understand how the data were collected and where they came from.

“Transparency can lead to more of an informed interpretation about what these data really can and can’t do,” Orsini said. “It’s a continuum, and you have to put the data into context in the question that you have at hand to see how it might be able to help you. It’s probably not the panacea, but it can’t be completely discounted either.”

To address this need for transparency, Orsini proposed opening a dialogue with end users of the data, letting users “follow the breadcrumbs all the way through your process, to the results, and then figuring out better ways to communicate about the study design and what the results could mean.”

 

Multifaceted Nature of Transparency

Nirosha Mahendraratnam Lederer, PhD, Managing Associate at the Duke-Margolis Center for Health Policy, Durham, NC, USA, echoed this call for greater transparency. “I think transparency is key. The more up-front you are with what you plan on doing with the data, the more it builds trust in the studies.”

To aid transparency, she proposes routine prespecification and registration of real-world evidence study protocols. However, she clarified that these study protocols support transparency of the analysis, noting that “data curation transparency is something quite different.”

 

“It’s really that payers are interested in all of the kinds of measures that you have in real-world data—the actual cost and avoided healthcare utilization, hospitalizations, longer-term outcomes—outcomes that you typically have difficulty measuring in trial.”
—Diana Brixner, PhD, RPh

 

A Call for Data Curation Transparency

Lederer noted that while many researchers may already employ high-quality curation practices, problems remain from an evaluation standpoint because of poor documentation as well as a lack of universal standards for data curation and measures of fitness. These practices need to become not only more transparent, but also more accessible.

To address this, her group proposes guiding checklists. “We are aiming for the development of a minimum standard list of fitness-for-use checks, focusing first on reliability. We should be concerned that people might keep cutting data in different ways to possibly get an answer they want,” she warned. “I think as a best practice, you should prespecify and justify what curation practices you plan on using. That being said, we often learn lessons along the way that may require changing our original plan. That’s okay, but it should be documented, and again, justified.”
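As a concrete illustration of what prespecified fitness-for-use checks might look like in code, the sketch below encodes a few reliability checks on a hypothetical claims extract. The column names, thresholds, and checks are assumptions for illustration, not the minimum standard list Lederer’s group is developing.

```python
# Sketch of prespecified fitness-for-use checks on a hypothetical claims extract.
# Column names and thresholds are illustrative assumptions, not a published standard.
import pandas as pd

def run_fitness_checks(df: pd.DataFrame) -> dict:
    """Return pass/fail results for a few prespecified reliability checks."""
    return {
        # Completeness: the key clinical field should rarely be missing.
        "diagnosis_complete": df["diagnosis_code"].notna().mean() >= 0.95,
        # Plausibility: service dates should fall inside the study window.
        "dates_in_window": df["service_date"].between("2019-01-01", "2020-12-31").all(),
        # Conformance: one row per patient in this enrollment-level extract.
        "ids_unique": not df["patient_id"].duplicated().any(),
    }

# Tiny synthetic extract to show the checks in action.
claims = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "diagnosis_code": ["E11.9", None, "I10"],
    "service_date": pd.to_datetime(["2019-06-01", "2020-02-15", "2020-11-30"]),
})
print(run_fitness_checks(claims))  # diagnosis_complete fails: 1 of 3 codes missing
```

The point of prespecifying such checks, per Lederer, is that any later deviation from them is documented and justified rather than silently absorbed into the analysis.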

 

Lessons From COVID-19

Lederer also discussed how COVID-19 has accelerated decision makers’ use and understanding of real-world evidence. “In the context of COVID-19, we’ve had to rely on real-world data and real-world evidence because that’s all we had.” However, she warned that the demand for real-world evidence could lead researchers into “challenging situations when they try to force a data set or when data aren’t reliable.”

“My mantra is you have to use the right data to answer the right question. If you don’t understand what the data are telling you, you could get some misinformation and potentially some harmful decision making.” However, she remains optimistic, as there is unprecedented collaboration in the real-world evidence community to fight COVID-19, with extensive sharing not only of lessons learned but even of code to improve both data quality and analysis methods and generate better real-world evidence.

 

“Transparency can lead to more of an informed interpretation about what these data really can and can’t do.”
—Lucinda S. Orsini, DPM, MPH

Improving the Real-World Evidence Ecosystem

In response to the expanded use of real-world evidence, Lederer and her colleagues identified significant lessons learned from the current COVID-19 pandemic. “We’re really thinking about how you advance the real-world data ecosystem. We’re looking too at incentives to improve data collection at the point of entry (eg, electronic medical records), while improving curation at the back end.” This, she feels, could improve data efficiency and alignment.

Lederer also emphasized the need for the right evaluators and reviewers for these studies, suggesting evaluation criteria to guide reviews. “The role of real-world evidence is different for new products versus products already on the market (eg, repurposed therapies). And we want to make sure that people with the right skillset are evaluating that research.

“We’re also thinking about novel data sources. How can we use patient-generated health data to complement our traditional real-world data sources? What are lessons learned related to outcomes and end points with remote patient monitoring?” Lederer closed by saying, “Even though we’re learning about digital tools in the clinical trial setting through decentralized trials, digital tools are frequently used in the real-world setting. And if we are learning how digital tools are capturing outcomes of interest in the trial setting, that might open up the use of these tools and validation of these tools in the real-world setting.”

 

The Value of Replication

Shirley Wang, PhD, Assistant Professor of Medicine at Harvard Medical School, Boston, MA, USA, spoke of the value of replication. “I think one of the strengths of real-world evidence is that increasingly these data sources are accessible to multiple investigators who can verify replicability and the robustness of the decisions, as opposed to primary data collection, which is a lot harder to replicate.”

Wang is part of the team leading the REPEAT Initiative, a large-scale replication project based within the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women’s Hospital and Harvard Medical School. REPEAT aims to independently replicate a random sample of 150 peer-reviewed real-world evidence studies. This project is part of a wider movement across many scientific disciplines (eg, psychology, economics, bench sciences) to replicate prior research findings.

Wang shared that this movement has been fueled by a “replication crisis,” driving researchers to examine what can be changed within their research culture to improve the reproducibility of research findings. She emphasized that her team was measuring replicability, not study validity. “They’re different, but related. Replicability can make it easier for you to assess validity because you understand what was done, but it does not equal validity.” She continued, “We want validity, and replicability helps us get there.”

 

Strong Correlations Found

Using the prior work of the ISPOR/ISPE joint task force, Wang and her colleagues established a checklist of specific parameters they deemed necessary to facilitate reproducibility and assess validity. For these 150 studies, Wang and her colleagues licensed access to the same databases, used the same years of data, and applied the same methodologies. Wang emphasized that study results had been redacted, so her team could attempt replication without knowing the actual results.

As reported by Wang, the team found a strong correlation between the original effect size and the replication effect size (correlation coefficient = 0.8). “If you look at the relative magnitude of the original effect size compared to the replication effect size, the median relative magnitude is what we use to indicate that we’ve hit it spot on.” However, she noted that there was a substantial subset of studies whose findings the team was not able to replicate, despite using the same source data and the same methods.
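The two summary metrics Wang describes can be computed in a few lines. The sketch below uses made-up effect estimates purely for illustration, and its definition of relative magnitude (the ratio of log effect estimates) is an assumption about the calculation, not a description of the REPEAT Initiative’s exact methods.

```python
# Illustrative computation of the two replication metrics described above.
# The effect estimates are made-up stand-ins, not data from the REPEAT Initiative.
import numpy as np

original = np.array([1.20, 0.75, 2.10, 0.90, 1.55])     # original-study effect estimates
replication = np.array([1.15, 0.80, 1.90, 1.05, 1.60])  # matching replication estimates

# Correlation is computed on the log scale, as is usual for ratio measures.
corr = np.corrcoef(np.log(original), np.log(replication))[0, 1]

# One plausible definition of relative magnitude: the ratio of log effect
# estimates; a median near 1.0 indicates replications landing "spot on."
relative_magnitude = np.log(replication) / np.log(original)

print(f"correlation = {corr:.2f}")
print(f"median relative magnitude = {np.median(relative_magnitude):.2f}")
```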

 

Need for Better Documentation

Wang did highlight some documentation challenges encountered in their study replication project, namely, incomplete reporting of how choices were made to generate the evidence. “We need all of that information in order to truly understand, do we agree with the choices that you’re making, does it raise any validity concerns? What are the choices that are being made in order to generate the evidence?” To aid communication, Wang recommends adding design diagrams as a high-level summary of the temporal windows in a study’s design.

 

Payer Perspectives of Real-World Evidence

Finally, Diana Brixner, PhD, RPh, Professor in the Department of Pharmacotherapy at the University of Utah College of Pharmacy, Salt Lake City, Utah, USA, conveyed thoughts from the payer perspective. In her view, the importance of real-world evidence research to payers has grown with its use in reimbursement decisions. “It’s really that payers are interested in all of the kinds of measures that you have in real-world data—the actual cost and avoided healthcare utilization, hospitalizations, longer-term outcomes—outcomes that you typically have difficulty measuring in trial.”

Brixner emphasized that payers also want to see trial research validated against real-world evidence, given that increasingly expensive drugs are coming to market with less and less data—a significant issue for oncology, gene therapy, and other specialty drugs. Because the FDA has accelerated patient access to these drugs by lowering barriers to market entry, fewer clinical data exist on which to base reimbursement decisions at launch.

To Brixner, real-world evidence could help resolve this issue. “I think real-world studies need to be taking place in order to validate clinical trial results and to support reimbursement decisions.

“The expectation has been that industry is coming out and describing their potential new product: where they think the target population would be, what the benefit would be, what the potential price might be.” But Brixner noted that while health plans may be willing to reimburse for a given indication initially, future reimbursement would ideally hinge on real-world evidence studies within the health plan’s population. “Validate what you said it was going to do for our populations based on your clinical trials.”

But according to Brixner, payers are having difficulty “holding the line” on hinging future reimbursement on real-world evidence. “There’s a real struggle getting access to validated studies in a timely manner for reimbursement decisions,” she said. Health plans face staffing and data-quality challenges that make it difficult to adequately validate prelaunch claims, while manufacturers see few real incentives to support real-world evidence studies.

Will payers actually discontinue reimbursement due to insufficient real-world evidence? In Brixner’s view, industry feels that avoiding these real-world evidence studies may be a gamble worth taking, given that payers are unlikely to drop coverage of their products. “That is the sort of balance we exist in right now. How do we move from this point?”

Brixner suggested a possible solution in value-based pricing. “In the United States in particular, a lot of the pricing is driven by these rebate schemes.” She continued, “I think the model needs to change. And I think that the model needs to start being this value-based contracting driven by performance-based research agreements, where payers, researchers, and manufacturers collaborate together in everyone’s best interest. And right now, that’s not happening.”

About the Author

Michele Cleary is an HEOR writer in Minneapolis, MN.
