Using the Potential Outcomes Model to Inform the Design and Interpretation of Causal Studies of the Effectiveness and Safety of Healthcare Interventions: An Interdisciplinary Perspective
Author(s)
Katherine M. Harris, PhD, Michael Grabner, PhD, Shelly-Ann Love, PhD, Ruth Wangia Dixon, PhD, Sarah Hoffman, PhD, Anna Wentz, PhD
Carelon Research, Wilmington, DE, USA.
OBJECTIVES: To use the potential outcomes model (POM) to compare features of three causal study designs: randomized controlled trials (RCTs), quasi-experiments, and observational studies.
METHODS: We used an unstructured approach to gather information about the origins, features, and uses of the POM to measure the effectiveness and safety of healthcare interventions in the biostatistics, economics, and epidemiology literatures. We prioritized introductory presentations over advanced mathematical and/or statistical concepts. We described the components of the POM and the assumptions required to make causal inferences without randomization, and provided a side-by-side comparison of key features, strengths, and limitations of the three designs.
RESULTS: The POM is an arithmetic expression that uses the concept of counterfactual treatment assignment to decompose the average difference in outcomes for treated and untreated individuals into three components: (1) the average treatment effect, i.e., the difference in expected outcomes when all individuals are treated versus untreated; (2) the difference in expected outcomes for treated and untreated groups in the absence of treatment (often called “selection bias”); and (3) the difference in the expected effect of treatment between treated and untreated individuals (often called “selection on treatment effects”). RCTs prevent selection bias and selection on treatment effects by randomly assigning individuals to treatment or control groups. The result is unbiased causal estimates, but these estimates may not reflect real-world clinical practice or be generalizable. Quasi-experiments and observational studies can improve generalizability and realism, but require subject matter expertise, complex models, and assumptions to address sources of bias. Disciplines often use different terms for underlying POM assumptions and potential biases, which can complicate understanding.
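In standard potential-outcomes notation (a sketch added here for clarity; the abstract itself does not define symbols), let Y_1 and Y_0 denote an individual's potential outcomes with and without treatment and D indicate treatment received. The three-component decomposition described above can then be written as:

```latex
\underbrace{E[Y \mid D{=}1] - E[Y \mid D{=}0]}_{\text{observed difference}}
= \underbrace{E[Y_1 - Y_0]}_{\text{(1) average treatment effect}}
+ \underbrace{E[Y_0 \mid D{=}1] - E[Y_0 \mid D{=}0]}_{\text{(2) selection bias}}
+ \underbrace{E[Y_1 - Y_0 \mid D{=}1] - E[Y_1 - Y_0]}_{\text{(3) selection on treatment effects}}
```

The identity follows by adding and subtracting E[Y_0 | D=1] and E[Y_1 - Y_0]; under randomization, D is independent of (Y_0, Y_1), so terms (2) and (3) vanish and the observed difference equals the average treatment effect.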
CONCLUSIONS: Understanding the POM can illuminate similarities and differences in the design and implementation of causal studies across disciplines, and their associated trade-offs in accuracy, realism, and generalizability. This understanding can inform the selection of study designs that improve reliability of causal estimates and robustness to underlying assumptions.
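The decomposition can be illustrated with a small simulation (a hypothetical data-generating process, not drawn from the abstract: sicker individuals are both more likely to be treated and benefit more from treatment, so both bias components are nonzero):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: u is unobserved baseline severity.
u = rng.normal(size=n)
y0 = u                          # potential outcome without treatment
y1 = u + 1.0 + 0.5 * u          # potential outcome with treatment (effect varies with u)
d = (u + rng.normal(size=n)) > 0  # non-random assignment: sicker people treated more often

ate = (y1 - y0).mean()                               # (1) average treatment effect
naive = y1[d].mean() - y0[~d].mean()                 # observed treated-vs-untreated difference
selection_bias = y0[d].mean() - y0[~d].mean()        # (2) selection bias
selection_on_effects = (y1 - y0)[d].mean() - ate     # (3) selection on treatment effects

# The decomposition is an exact identity in the sample:
assert np.isclose(naive, ate + selection_bias + selection_on_effects)
```

Here the naive comparison overstates the average treatment effect because both bias terms are positive; randomizing `d` instead would drive components (2) and (3) toward zero.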
Conference/Value in Health Info
2025-05, ISPOR 2025, Montréal, Quebec, CA
Value in Health, Volume 28, Issue S1
Code
MSR47
Topic
Methodological & Statistical Research
Topic Subcategory
Confounding, Selection Bias Correction, Causal Inference
Disease
No Additional Disease & Conditions/Specialized Treatment Areas