Published Oct 2012
Berger ML, Dreyer N, Anderson F, Towse A, Sedrakyan A, Normand SL. Prospective observational studies to assess comparative effectiveness: the ISPOR Good Research Practices Task Force Report. Value Health. 2012;15(2):217-230.
Objective: In both the United States and Europe there has been increased interest in using comparative effectiveness research of interventions to inform health policy decisions. Prospective observational studies will undoubtedly be conducted with increased frequency to assess the comparative effectiveness of different treatments, including as a tool for "coverage with evidence development," "risk-sharing contracting," or as a key element in a "learning health-care system." The principal alternatives for comparative effectiveness research include retrospective observational studies, prospective observational studies, randomized clinical trials, and naturalistic ("pragmatic") randomized clinical trials.
Methods: This report details the recommendations of a Good Research Practices Task Force on Prospective Observational Studies for comparative effectiveness research. Key issues discussed include how to decide when to conduct a prospective observational study in light of its advantages and disadvantages relative to the alternatives. The report also summarizes the challenges in, and approaches to, the appropriate design, analysis, and execution of prospective observational studies to make them most valuable and relevant to health-care decision makers.
Recommendations: The task force emphasizes the need for precision and clarity in specifying the key policy questions to be addressed, and that studies should be designed with the goal of drawing causal inferences whenever possible. If a study is being performed to support a policy decision, it should be designed as a hypothesis-testing study. This requires drafting a protocol as if subjects were to be randomized: investigators should clearly state the purpose or main hypotheses, define the treatment groups and outcomes, identify all measured and unmeasured confounders, and specify the primary analyses and the required sample size.
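As a brief illustration of the kind of prespecified sample-size calculation the task force calls for, the sketch below applies the standard two-proportion z-test formula; the outcome rates, alpha, and power shown are hypothetical planning inputs, not values taken from the report.

```python
from math import ceil, sqrt

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Per-group sample size for detecting outcome proportions p1 vs. p2
    with a two-sided two-proportion z-test (classic textbook formula);
    defaults correspond to two-sided alpha = 0.05 and 80% power."""
    p_bar = (p1 + p2) / 2
    term = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return ceil(term ** 2 / (p1 - p2) ** 2)

# Hypothetical example: 10% vs. 15% expected event rates
n = n_per_group(0.10, 0.15)  # -> 686 subjects per group
```

Writing this calculation into the protocol before enrollment begins, alongside the primary analysis, is what distinguishes a hypothesis-testing study from an exploratory one.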
Separate from analytic and statistical approaches, study design choices can strengthen the ability to address potential biases and confounding in prospective observational studies. The use of inception cohorts, new-user designs, multiple comparator groups, matching designs, and assessment of outcomes thought not to be affected by the therapies being compared are strategies that should be given strong consideration, recognizing that feasibility constraints may apply. The reasoning behind all study design and analytic choices should be transparent and explained in the study protocol.
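To make the matching-design idea concrete, here is a minimal sketch, assuming a hypothetical single measured confounder (age) and a hypothetical caliper: each treated new user is greedily matched to the closest unused comparator. Real studies would typically match on a propensity score over many confounders; the cohorts and variable names below are invented for illustration.

```python
def match_pairs(treated, comparators, confounder, caliper):
    """Greedily match each treated subject to the closest unused
    comparator on `confounder`, within `caliper`; subjects without
    an acceptable match are excluded from the matched analysis set."""
    available = list(comparators)
    pairs = []
    for t in treated:
        best, best_dist = None, None
        for c in available:
            d = abs(t[confounder] - c[confounder])
            if best is None or d < best_dist:
                best, best_dist = c, d
        if best is not None and best_dist <= caliper:
            pairs.append((t, best))
            available.remove(best)  # 1:1 matching without replacement
    return pairs

# Hypothetical subjects: treated new users vs. an active-comparator cohort
treated = [{"id": 1, "age": 64}, {"id": 2, "age": 71}, {"id": 3, "age": 49}]
comparators = [{"id": 10, "age": 66}, {"id": 11, "age": 70},
               {"id": 12, "age": 80}, {"id": 13, "age": 50}]

pairs = match_pairs(treated, comparators, "age", caliper=5)
```

The design choice here mirrors the report's point: balancing measured confounders at the design stage, before any outcome analysis, rather than relying solely on statistical adjustment afterward.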
Execution of prospective observational studies is as important as their design and analysis in ensuring that results are valuable and relevant, especially in capturing the target population of interest and achieving reasonably complete and nondifferential follow-up. Similar to the importance of declaring a prespecified hypothesis, we believe that the credibility of many prospective observational studies would be enhanced by their registration on appropriate publicly accessible sites (e.g., clinicaltrials.gov and encepp.eu) in advance of their execution.
Keywords: comparative effectiveness, prospective observational studies.
Copyright © 2012, International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc.
- Good research practices for comparative effectiveness research – defining, reporting & interpreting - Task Force Report Part I
- Good research practices for comparative effectiveness research - bias & confounding in the design - Task Force Report Part II
- Good research practices for comparative effectiveness research: analytic methods to improve causal inference from nonrandomized studies of treatment effects using secondary data sources - Task Force Report Part III