Real World Data

Sep 1, 2007, 00:00 AM
10.1111/j.1524-4733.2007.00277.x
https://www.valueinhealthjournal.com/article/S1098-3015(10)60468-8/fulltext
An old story, favored by editors, describes a fishmonger’s sign that says “Fresh Fish Sold Here.” But the sign is verbose: “fresh” is unnecessary, because no fishmonger would sell fish that was anything but fresh. “Here” is also redundant. Where else but at the shop would the fish be for sale? “Sold” is not needed either, because no one would expect the fish to be given away. Whether “Fish” is needed is debatable, because, as the story goes, you can smell it a mile away.

The report in this issue by the ISPOR Task Force on Real World Data brought this story to mind. The Task Force was asked by ISPOR “to develop a framework to assist health care decision-makers in dealing with ‘real world’ data and information in ‘real world’ health care decision-making, especially related to coverage and payment decisions.” In their deliberations they wrestled with the definition of “real world data,” as well they might. In the discussion about what data to use, the phrase “real world” is meant to signify a particular category of data. The word “real” is unnecessary to this phrase, as any health researcher would obviously be focused on reality rather than fantasy. So should data be “world” data, or not? What is the alternative?

The Task Force decided to define “real world data” as “data used for decision making that are not collected in conventional randomized controlled trials.” Why should it even be necessary to make a case for using data outside of trials? Apparently it is necessary because trials have assumed such a lofty status in biomedical research. It is not without some reason that randomized trials are often viewed as the pinnacle of research endeavor. They are extremely powerful tools for addressing specific scientific questions. Random assignment makes it possible to eliminate alternative explanations for observed associations that may be extremely hard to rule out using “real world data.” In addition, the rigorous protocol routinely applied in randomized trials reduces variability from clinical indications, dosing schedules, adjunct therapies, and many other sources, all of which can undermine the interpretation of findings in settings that lack such rigor in who is studied and how their disease is treated.

For these reasons, trials are rightly viewed as one of the strongest research tools available to biomedical researchers. But we must be careful not to exaggerate the role that randomized trials play in the accumulation of knowledge generally. Some consider the randomized trial to be a gold standard for research that is requisite for contributing solid knowledge, arguing that proof of a hypothesis can only be established from a randomized trial. This view is too extreme, fostering confusion about how science works. Watson and Crick did not need randomized trials to infer the structure of DNA, nor did John Snow need a trial to demonstrate that drinking water contaminated with sewage was strikingly associated with cholera occurrence. Furthermore, how can trials constitute “proof” when several trials of the same intervention can produce divergent results? If trials really provided proof, one trial for every treatment would suffice.

The reality is that trials can be controversial, mutually contradictory, or irreproducible. Further, philosophers have agreed for centuries that no empirical hypothesis can be proven, in the sense of logical certainty, by any experiment or any observation. The inability to achieve certain knowledge, however, is no barrier to the accumulation of knowledge. Although trials can be instrumental to studying many phenomena, progress in science does not depend absolutely on the ability to conduct randomized trials. Indeed, without ever resorting to a single randomized trial, scientists have developed a rich body of knowledge in areas as diverse as plate tectonics, the evolution of species, astronomy, and the effects of cigarette smoking on human health. Thus, although randomized trials are powerful tools, they should be viewed as merely one very useful technique for framing observations intended to address a particular question.

Trials are often designed to evaluate treatment efficacy in a narrow setting. The narrowness of that setting is often considered a liability for a trial, because it limits generalizability of the results. To achieve greater generalizability, many studies are designed to have study populations that are broadly representative of target populations, as in population surveys. Representativeness in a trial, however, conflicts with the aim of narrowing the range of variables that might affect the study result, a goal in experimentation that supersedes representativeness. Just as mouse researchers would rather have identical mice in their experiments than mice that are representative of all mice, trials are scientifically stronger with homogeneous patient populations than with broadly representative study populations. The reason is that the internal comparisons of the experimental study are more important than any attempt to generalize the study results through statistical inference. The whole point of experimental science is to narrow the range of influences and zero in on what happens in highly controlled and delineated circumstances.

Furthermore, it is a futile hope that simply by studying a broad range of subjects, one can then apply the results to people with that broad range of characteristics. Instead, with a mix of participants, one gets a mix of results. If the study findings vary across subgroups, a representative study population tells nothing about that variation; it merely gives the average across those groups. If knowing that average is the goal of the study, representativeness of study subjects with respect to a given variable may be desirable. If, instead, the goal is to assess the efficacy of an intervention, the first hurdle may be to learn what the intervention can do in those who might benefit the most from it. Learning the average effect across a broader spectrum of patients might be a reasonable goal for future study. Thus, an ideal trial of a new intervention is usually better designed with less, rather than more, representativeness, both to reduce confounding by some risk factors and to focus the study on a patient population that might benefit most.

Unless a trial is designed from the start to evaluate coverage and payment, it is unlikely to provide ideal data on these facets of intervention to inform broader policy or business decisions. Typically a trial is of only marginal use for evaluating questions that are not closely related to the study aims. In most instances, researchers studying diverse health outcomes must look beyond trials and trial data. Outside of trials one finds what the Task Force has termed “real world data.” Studies drawn from the everyday experience of patients who are members of broadly defined populations face serious challenges. But if astronomers can look to the stars and learn about the universe without conducting any randomized trials, surely health researchers can contribute to knowledge using data beyond those from randomized trials. Fortunately, with sufficient care, such data can be organized into sound studies that yield solid inferences. The report on Real World Data in this issue provides a useful summary of the opportunities, difficulties, and intricacies of moving beyond trial data to study health outcomes.