Q&A

What the Rise of Real-World Evidence Means for the Pharmaceutical Industry: A Closer Look

 

Now more than ever, there is a pressing need for real-world evidence to inform decision making in this COVID-19–affected world. In this month’s Q&A, Jennifer Graff, PharmD, Vice President, Comparative Effectiveness Research at the National Pharmaceutical Council in Washington, DC, looks at how real-world evidence is being used to give direction to patients, providers, payers, and policy makers.



VOS: Several individuals have offered opinions on what the rise of real-world evidence means for the pharmaceutical and vaccine industries. What is your perspective?

 

Graff: The rise of real-world evidence is a positive step forward for patient care and patients. Too often, patients and consumers do not know what to expect over the course of their disease, how treatments work for patients who look like them, or what the optimal treatment sequence is. When we spoke with representatives from patient organizations, we heard clearly that they were surprised real-world evidence studies were not already deeply embedded in clinical care decisions. They recognized that studies using real-world data—when done with high-quality data and good research methods—can fill gaps in knowledge and inform routine care and coverage decisions.

 

In the past few years, the conversation has shifted. There is a broader understanding that real-world evidence can complement, not compete with, randomized controlled trials. The time and cost of answering the questions that we—patients, providers, payers, and policy makers collectively—need answered are too great not to use all high-quality and trustworthy evidence.

 

“Researchers have estimated that the use of real-world evidence could reduce trial costs by 5% to 50%, expedite safety monitoring, and simplify trial and data collection.”

 

VOS: Do you see specific areas where real-world evidence has been more broadly accepted as a part of the research and development paradigm? Why do you think this is the case?

 

Graff: For the biopharmaceutical and vaccine industries, the rise of real-world evidence offers many opportunities to expand beyond the traditional use of real-world data for safety surveillance. Real-world data are used to identify potential drug or vaccine targets and pathways. Researchers have estimated that the use of real-world evidence could reduce trial costs by 5% to 50%, expedite safety monitoring, and simplify trial and data collection. Real-world data are transforming clinical trial designs and accelerating trial recruitment to get new treatments to patients more quickly.

 

Within the clinical trial context, pragmatic studies combining randomization with real-world evidence sources have seen broader acceptance. For example, a pragmatic trial comparing paliperidone to traditional treatment among patients with schizophrenia and prior contact with the criminal justice system supported the product’s expanded indication. In oncology and rare disease development programs, historical control arms provide natural history comparisons for single-arm, open-label studies. Once a product is approved, value-based arrangements rely on high-quality real-world data to quantify treatment results and transform payment and reimbursement. While these benefits are significant for drug development, it is important to remember that positive steps for patient care and patients are positive steps for the biopharmaceutical and vaccine industries.

 

VOS: Do you foresee any issues that could prevent its successful use?

 

Graff: There are multiple technical challenges with the collection, transformation, and evaluation of real-world data. However, we are learning that good data, combined with thoughtful design and analysis, yield similar results regardless of which sophisticated statistical methods are applied.

 

The more intractable obstacles are the cultural and infrastructure challenges. Traditional research paradigms still exist in many research and development organizations: clinical trials and real-world evidence are seen as separate, rather than complementary, designs. There are also infrastructure challenges, as end users cannot determine whether the results of a real-world evidence study reflect a prespecified analysis or simply the most positive and impressive result. Finally, the demand for highly trained individuals to design and analyze high-quality real-world evidence studies exceeds the supply. These challenges can be overcome with education, tools, and training.

 

"Regulatory groups have shown willingness to use these real-world data to support product approval when traditional clinical trials would be difficult or unethical to conduct."

 

VOS: You have seen the use of real-world evidence become more prominent over the past several years, including in the National Pharmaceutical Council’s own research. Despite its increase in prominence, there still appears to be a lack of urgency with respect to its broad adoption and application in the healthcare sector.

 

Graff: Yes, real-world evidence has become more prominent. Are its adoption and application as swift and consistent across all decision makers as they could be? No, but there is some movement. For example, real-world evidence is cited more frequently in coverage decisions by US commercial health plans. In 2017, real-world evidence comprised 10% of all cited studies; by 2019, that share had grown to 16%. This increase may be due to new treatments for rare and orphan diseases, where information may be more limited and real-world evidence is relied upon more often. However, health plans are also becoming more familiar with real-world data and real-world evidence through more sophisticated uses, such as predictive modeling and value-based agreements.

 

Another area where adoption and application are lacking is the consideration, across the board, of external control arms built from real-world data. External control arms compare results from historical or concurrent real-world data with the results of studies that are typically open-label and single-arm. Regulatory groups have shown willingness to use these real-world data to support product approval when traditional clinical trials would be difficult or unethical to conduct. Yet health technology assessors and reimbursement bodies have been less willing to consider the same information when assessing value or applying add-on payments for these new technologies.

 

VOS: In recent years, a distinction has been drawn between regulatory-grade real-world evidence and that used to support coverage decisions and guideline development in healthcare. Do you see this approach changing the threshold for the type of real-world evidence being used in coverage decisions and guideline development?

 

Graff: This is an important distinction and one we think about a lot. The US Food and Drug Administration (FDA) has been an important arbiter of truth. The agency’s use of high-quality real-world evidence could accelerate the adoption (or rejection) of real-world evidence by other stakeholders. Because clinical guideline bodies and health plans must make hundreds of decisions a year, it could be easy for them to limit their use of real-world evidence to regulatory-grade evidence. For example, some health plans use journal tier as a proxy for study quality and have noted they only consider studies published in higher-impact or higher-tier journals. But as we have seen in a recent systematic review, journal impact factor cannot be relied upon as a surrogate for study quality.

 

We also worry that very narrow use or very stringent requirements for regulatory-grade real-world evidence considered by the FDA will have implications for other stakeholders. There are opportunities for all federal healthcare programs—not just the FDA—to consider how real-world evidence could guide decision making.

 

VOS: What do you see as the primary difference between the two approaches?

 

Graff: The key difference is the level of uncertainty each group (eg, regulators, clinical guideline bodies, health plans) is willing to tolerate. Regulatory decisions and the evidence underpinning these decisions have little room for uncertainty. For example, best-corrected distance visual acuity is a meaningful endpoint for regulatory decisions but may be less relevant for health plans that are trying to slow vision loss. The endpoints often used in regulatory decisions are helpful but insufficient for coverage decisions.

 

Second, the FDA requires randomized controlled trials to meet certain data quality checks, such as data completeness, confirmation, and provenance. For clinical trials, the study protocol and analysis plan are prespecified and shared to ensure the research methods are transparent. These elements are just as important for regulatory-grade real-world evidence. Reimbursement-grade real-world evidence studies should also use high-quality data, have prespecified hypotheses, and be transparent, but they are likely to require fewer checks and balances than regulatory-grade real-world evidence.

 

Finally, trial populations are often narrowly defined for regulatory studies. Regulatory-grade real-world evidence is likely to mimic these hypothetical trials and, as a result, to exclude the patient populations considered by clinical guidelines and coverage bodies. As we gain clarity on regulatory-grade real-world evidence, similar conversations are needed to define and develop reimbursement-grade real-world evidence.

 

VOS: Do you foresee a shift in how evidence hierarchies address real-world evidence in their criteria moving forward?

 

Graff: Basing evidence hierarchies on the decision to be made, rather than on the studies and study designs, is a laudable goal. However, it may be a step too far. A dynamic, rather than static, evidence hierarchy may be more feasible. In a dynamic hierarchy, studies move up or down based on the quality of the data and the risk of bias. For example, in the GRADE system, real-world evidence studies start at a lower evidence level than randomized controlled trials, but real-world evidence studies with a low risk of bias may move up the evidence hierarchy. By contrast, randomized controlled trials begin at a higher level of evidence and are downgraded if there is a greater risk of bias. The dynamic approach shifts the focus from “best evidence” to “best available evidence” and supports more informed decisions.

 

Evidence hierarchies currently allow groups to rely on study design alone, short-cutting the assessment of a study’s credibility or risk of bias. Even when groups shift towards “best available evidence,” they may use blunt assessment tools. For example, some assessment bodies only consider real-world evidence studies if they include certain outcomes or have a specific sample size. Over the past decade, the National Pharmaceutical Council, along with other groups, has developed tools such as the GRACE Checklist and the CER (comparative effectiveness research) Collaborative questionnaire to help end users assess an individual study’s credibility and risk of bias. Using the totality of evidence from lower- and higher-risk studies, rather than only a subset of individual studies, helps improve the certainty of the final recommendations.

 

VOS: Do you foresee real-world evidence driving greater levels of collaboration between stakeholders (healthcare providers, payers, and policy makers)?

 

Graff: Absolutely. Collaboration will extend beyond the traditional stakeholders: providers, payers, and policy makers. Activated patient groups are eager to contribute data if they have clarity around data privacy and ownership, and they offer opportunities for supplemental real-world evidence endpoints. I also expect we will see more collaboration across the biopharmaceutical and vaccine industries as more pragmatic and adaptive study designs are initiated to ensure more efficient trials and to adapt to new treatment combinations.

 

"Basing evidence hierarchies on the decision to be made, rather than on the studies and study designs, is a laudable goal. However, it may be a step too far."

 

VOS: If you were to project out 5 years, where do you feel the future of real-world evidence will be in the pharmaceutical and vaccine industries?

 

Graff: By 2025, the use of real-world evidence for approvals, or in support of approvals, should become less anecdotal and more routine. Regulatory-grade real-world evidence may be limited to certain disease contexts initially, but successful biopharmaceutical organizations will use real-world data to accelerate product development across all therapeutic areas. Beyond the regulatory environment, I hope that the use of real-world evidence in clinical guideline development and in payment and coverage decisions will be less sporadic and more routine. This will require researchers to ensure that the real-world evidence they develop is based on reliable data, uses credible methods, and is transparent about the process used.

 

Can this be accomplished in 5 years? That timing is aggressive. But we owe it to patients, who want to know what is most likely to work best for them based on their personal characteristics, to try. •

 

Martin Marciniak, PhD, is the Section Editor for the Q&A column.

 

 
