Patterns in Attribute Selection Reporting in Patient Preference Studies: A Systematic Review

Speaker(s)

Hall R1, Wong J2, Chua GN3, Lo SH3
1Acaster Lloyd Consulting Ltd, London, UK, 2University College London, London, UK, 3Acaster Lloyd Consulting Ltd, London, UK

BACKGROUND: Discrete choice experiments (DCEs) are a popular quantitative method used to estimate the importance of each attribute relative to the other attributes included in a study. The choice of attributes and associated levels therefore ultimately determines the validity of results. However, a lack of transparency and detail in the reporting of attribute selection can make risk of bias assessment challenging.

Barriers to transparent reporting include the current absence of consolidated best-practice guidance on reporting of attribute selection and the word limits imposed by publishing journals. More recently, authors have sought to overcome this challenge by publishing attribute development processes, piloting, or study protocols as standalone manuscripts.

OBJECTIVES: To summarise attribute selection reporting patterns in studies providing an in-depth description of attribute selection for DCE studies estimating patient preferences for healthcare interventions.

METHODS: A systematic review of Embase and Medline (Ovid), combining terms relating to DCEs and attribute selection, was conducted in May 2024. Narrative synthesis was used to summarise attribute selection reporting patterns.

RESULTS: Thirty-one studies were included. Methods of attribute selection included literature reviews (n=25), qualitative research with the target population (n=27), expert consultation (n=17), and quantitative prioritisation exercises (n=18). Attribute selection reporting patterns varied across studies and covered four broad areas: identification of candidate attributes, shortlisting of attributes for inclusion, refinement of attribute wording, and piloting. Attribute selection methods were often well described; however, decision-making processes such as the operationalisation of decision-making criteria (n=12), aggregation of evidence from multi-method approaches (n=14), and composition of the decision-making team (n=16) were less frequently reported.

CONCLUSIONS: There is currently wide variation in the transparency and reporting of attribute selection in DCE publications. The findings from this review could be used as a first step towards generating a consolidated attribute selection reporting checklist that could help standardize current practices, making risk of bias assessments easier.

Code

PCR180

Topic

Patient-Centered Research

Topic Subcategory

Stated Preference & Patient Satisfaction

Disease

No Additional Disease & Conditions/Specialized Treatment Areas