NUANCES OF ASSESSING CLINICIAN AGREEMENT IN CLINICIAN REPORTED OUTCOMES (CLINROS)

Author(s)

Bender RH1, Lenderking WR2
1Evidera, Bethesda, MD, USA, 2Evidera, Waltham, MA, USA

Increased interest among pharmaceutical companies in using ClinROs for clinical trial endpoints and FDA submissions has led to renewed attention to the standards for their validation. ClinROs require somewhat different approaches to validation than Patient-Reported Outcomes (PROs). For example, ClinROs can be based on readings, ratings, or even performance, which introduces additional sources of variability and nuance into their validation. One of the key components of validating a ClinRO is establishing its reliability. For most ClinROs, this means inter- and intra-rater reliability. Although often considered straightforward, reliability assessment can be the most technically challenging part of the validation analysis. An overview will be given of the primary competing statistics for assessing reliability (Pearson r, kappa, etc.), including a brief rationale and some pros and cons for each. We will focus on the intraclass correlation (ICC) statistic as the most useful for rating scale data, discussing its many forms and how to choose correctly among them. In particular, we will consider the distinction between aiming to demonstrate consistency versus agreement when choosing an ICC. We will also discuss key design considerations that can seriously impact ICCs and undermine the meaningfulness of the validation, for example sample size and how much flexibility to allow around departures from the intended test-retest interval. Also included will be a discussion of framing or identifying ICC standards or criteria for ClinRO validation. Throughout, we will share noteworthy insights from our experience with actual assessments, illustrating how key elements of this discussion play out in actual ClinRO validation work in the FDA context.
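The consistency-versus-agreement distinction highlighted above can be illustrated numerically. The sketch below implements the standard two-way ANOVA single-rater ICC formulas — ICC(C,1) for consistency and ICC(A,1) for absolute agreement, in McGraw and Wong's notation; the function name and data are hypothetical illustrations, not the authors' own analysis code.

```python
def icc_consistency_agreement(scores):
    """Compute single-rater ICC(C,1) and ICC(A,1) from a subjects x raters table.

    scores: list of lists; rows are subjects, columns are raters.
    Returns (icc_consistency, icc_agreement).
    """
    n = len(scores)        # number of subjects (rows)
    k = len(scores[0])     # number of raters (columns)
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    # Two-way ANOVA decomposition: subjects, raters, residual
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = sum((scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
                 for i in range(n) for j in range(k))
    msr = ss_rows / (n - 1)            # mean square for subjects
    msc = ss_cols / (k - 1)            # mean square for raters
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square

    # ICC(C,1): consistency — rater mean differences do not count as error
    icc_c = (msr - mse) / (msr + (k - 1) * mse)
    # ICC(A,1): absolute agreement — rater mean differences penalize the ICC
    icc_a = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    return icc_c, icc_a


# Hypothetical example: rater B scores every subject exactly 1 point
# higher than rater A, so rank ordering is identical but levels differ.
scores = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
icc_c, icc_a = icc_consistency_agreement(scores)
print(f"ICC(C,1) = {icc_c:.3f}")  # consistency is perfect: 1.000
print(f"ICC(A,1) = {icc_a:.3f}")  # agreement is lower: 0.833
```

The example makes the choice concrete: a constant rater offset leaves consistency at 1.0 but pulls absolute agreement down to about 0.83, which is why a validation aiming to show interchangeable raters should use an agreement-type ICC.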

Conference/Value in Health Info

2017-05, ISPOR 2017, Boston, MA, USA

Value in Health, Vol. 20, No. 5 (May 2017)

Code

PRM193

Topic

Methodological & Statistical Research

Topic Subcategory

Confounding, Selection Bias Correction, Causal Inference

Disease

Multiple Diseases
