NUMBERS OR NOISE? INTERPRETING INTERNAL VALIDITY TESTS OF STATED-PREFERENCE DATA (Advanced Workshop)
Author(s)
Kevin Marsh, PhD, Evidera Ltd, London, UK; Kathryn O’Callaghan, PhD, U.S. Food and Drug Administration, Silver Spring, USA; Jui-Chen Yang, MEM, Duke Clinical Research Institute, Durham, USA
PURPOSE: To help participants interpret the results of internal validity tests used to evaluate the quality of stated-preference data.
DESCRIPTION: Recent guidance on incorporating patient-preference information in benefit-risk assessments from FDA’s Center for Devices and Radiological Health calls for checks on the “logical soundness” of stated-preference studies used to inform clinical, product-development, or regulatory decision making. FDA has sponsored development of a validity-test tool for discrete-choice experiment (DCE) data that is freely available to researchers. The tool provides summary results for various tests of internal consistency, stability, transitivity, and logic. However, it is not clear what choice patterns qualify as a validity-test failure, how to diagnose the reasons for apparent failures, or how such observations should be treated in estimating preference parameters. Kathryn O’Callaghan will introduce FDA’s interest in this topic and chair the workshop. Reed Johnson will summarize the results of an FDA-sponsored study that collected validity-test results from 50 DCE datasets. Kevin Marsh will present an analysis of 14 DCE studies that included a dominated-pair test and discuss three possible explanations for apparent failures. Jui-Chen Yang will discuss a DCE study in which nearly one-third of respondents failed a dominated-pair test and evaluate the possible cognitive, preference, and statistical implications of those failures.
Conference/Value in Health Info
2018-05, ISPOR 2018, Baltimore, MD, USA
Code
W9
Topic
Methodological & Statistical Research, Patient-Centered Research