July 2019


Uncertainty and Coverage With Evidence Development: Does Practice Meet Theory? 

Pouwels XGLV, Grutters JPC, Bindels J, Ramaekers BLT, Joore MA.
Value in Health. 2019;22(7):799-807.

OBJECTIVE
In theory, a successful coverage with evidence development (CED) scheme is one that addresses the most important uncertainties in a given assessment. We investigated the following: (1) which uncertainties were present during the initial assessment of 3 Dutch CED cases, (2) how these uncertainties were integrated in the initial assessments, (3) whether CED research plans included the identified uncertainties, and (4) issues with managing uncertainty in CED research and ways forward from these issues.

METHODS
Three CED initial assessment dossiers were analyzed, and 16 stakeholders were interviewed. Uncertainties were identified in the interviews and dossiers and were categorized by cause: unavailability, indirectness, or imprecision of evidence. Identified uncertainties could be mentioned, described, or explored in the dossiers. Issues and ways forward to address uncertainty in CED schemes were discussed during the interviews.

RESULTS
Forty-two uncertainties were identified. Thirteen (31%) were caused by unavailability, 17 (40%) by indirectness, and 12 (29%) by imprecision. Thirty-four uncertainties (81%) were mentioned, 19 (45%) were described, and the impact of 3 (7%) uncertainties on the results was explored in the assessment dossiers. Seventeen uncertainties (40%) were included in the CED research plans. According to stakeholders, the conducted research did not address the identified uncertainties, although CED research should be designed to focus on them.

CONCLUSION
In practice, uncertainties were neither systematically nor completely identified in the analyzed CED schemes. A framework would help to identify uncertainty systematically, and this process should involve all stakeholders. Value of information analysis, together with the uncertainties that cannot be included in such analysis, should inform the design of CED research.
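
As a minimal illustration of the value of information analysis mentioned in this conclusion, the Python sketch below computes the expected value of perfect information (EVPI) from probabilistic sensitivity analysis output. The net benefit samples and strategy labels are hypothetical and are not drawn from the CED cases studied.

```python
import numpy as np

# Hypothetical probabilistic sensitivity analysis output:
# net monetary benefit samples for two strategies (rows = simulations).
rng = np.random.default_rng(0)
nmb = np.column_stack([
    rng.normal(20_000, 5_000, 10_000),   # strategy A
    rng.normal(21_000, 6_000, 10_000),   # strategy B
])

# EVPI per patient: expected value of choosing the best strategy in each
# simulation minus the value of choosing the best strategy on average.
evpi = nmb.max(axis=1).mean() - nmb.mean(axis=0).max()
print(f"EVPI per patient: {evpi:,.0f}")
```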

 

Machine Learning for Health Services Researchers

Doupe P, Faghmous J, Basu S.
Value in Health. 2019;22(7):808-815.

BACKGROUND
Machine learning is increasingly used to predict healthcare outcomes, including cost, utilization, and quality.

OBJECTIVE
We provide a high-level overview of machine learning for healthcare outcomes researchers and decision makers.

METHODS
We introduce key concepts for understanding the application of machine learning methods to healthcare outcomes research. We first describe current standards to rigorously learn an estimator, which is an algorithm developed through machine learning to predict a particular outcome. We include steps for data preparation, estimator family selection, parameter learning, regularization, and evaluation. We then compare 3 of the most common machine learning methods: (1) decision tree methods that can be useful for identifying how different subpopulations experience different risks for an outcome; (2) deep learning methods that can identify complex nonlinear patterns or interactions between variables predictive of an outcome; and (3) ensemble methods that can improve predictive performance by combining multiple machine learning methods.

RESULTS
We demonstrate the application of common machine learning methods to a simulated insurance claims dataset. We specifically include statistical code in R and Python for the development and evaluation of estimators for predicting which patients are at heightened risk for hospitalization from ambulatory care-sensitive conditions.
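
The article supplies its own R and Python code; the sketch below is an independent, minimal Python illustration of the workflow described (data preparation, estimator learning, and evaluation on held-out data), using a toy simulated dataset with hypothetical features rather than the authors' simulated claims data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulate a toy claims-like dataset (hypothetical features: age,
# comorbidity count, prior ambulatory visits); outcome = hospitalization.
rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.poisson(2, n),         # comorbidity count
    rng.poisson(4, n),         # prior ambulatory visits
])
logit = -6 + 0.04 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Hold out a test set so the estimator is evaluated on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Ensemble of decision trees as the estimator family.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate discrimination with the area under the ROC curve.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")
```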

CONCLUSION
Outcomes researchers should be aware of key standards for rigorously evaluating an estimator developed through machine learning approaches. Although multiple methods use machine learning concepts, different approaches are best suited for different research problems.


August 2019


Per-Prescription Drug Expenditure by Source of Payment and Income Level in the United States, 1997 to 2015

Tang W, Xie J, Kong F, Malone DC.
Value in Health. 2019;22(8):871-877.

OBJECTIVE
To evaluate expenditures and sources of payment for prescription drugs in the United States from 1997 to 2015.

METHODS
The Medical Expenditures Panel Survey (MEPS) was used for this analysis. Individuals with one or more prescription medicines were eligible for inclusion. Outcomes were the inflation-adjusted cost per prescription across all payment sources (self or family, public, private, and other sources) before and after the Medicare Part D benefit and the Affordable Care Act.
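
As a minimal illustration of the type of calculation described, the Python sketch below computes inflation-adjusted cost per prescription by payment source. The records, column names, and deflators are hypothetical and are not drawn from MEPS.

```python
import pandas as pd

# Hypothetical prescription-level records: year, payment source, expenditure.
rx = pd.DataFrame({
    "year":   [1997, 1997, 2015, 2015],
    "source": ["self", "public", "self", "public"],
    "spend":  [18.0, 6.0, 10.0, 35.0],
})

# Hypothetical deflators to express all spending in constant (2015) dollars.
cpi = {1997: 1.55, 2015: 1.00}
rx["spend_adj"] = rx["spend"] * rx["year"].map(cpi)

# Inflation-adjusted cost per prescription by year and payment source.
per_rx = rx.groupby(["year", "source"])["spend_adj"].mean()
print(per_rx)
```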

RESULTS
The cost per prescription increased from $38.56 in 1997 to $73.34 in 2015. Nevertheless, consumers' out-of-pocket expenditures per prescription decreased from $18.19 to $9.61, whereas public program expenditures per prescription increased from $5.61 to $34.43 over this time. The out-of-pocket share of per-prescription expenditures declined more for individuals in the low-income and near-poor groups (from 51.4% to 20.4% and from 46.5% to 17.2%, respectively, before and after implementation of Medicare Part D) than for individuals in higher-income groups. By 2015, over 90% of prescription purchases were covered by medical insurance. The per-prescription cost for medications consumed by uninsured individuals increased at a lower rate, from $31.83 to $54.96, versus $40.12 to $75.58 for the privately insured and $36.00 to $70.96 for the publicly insured (P<.001).

CONCLUSION
Prescription drug expenditures have increased over the past 2 decades, but public sources now pay for a growing proportion of prescription drug costs regardless of health insurance coverage or income level. Out-of-pocket expenditures have significantly decreased for persons with lower incomes since the implementation of Medicare Part D and the Affordable Care Act.

An Ethical Analysis of Coverage With Evidence Development

Carter D, Merlin T, Hunter D.
Value in Health. 2019;22(8):878-883.

ABSTRACT
Sometimes a government or other payer is called on to fund a new health technology even when the evidence leaves a lot of uncertainty. One option is for the payer to provisionally fund the technology and reduce uncertainty by developing evidence. This is called coverage with evidence development (CED). Only-in-research CED, when the payer funds the technology only for patients who participate in the evidence development, raises the sharpest ethical questions. Is the patient coerced or induced into participating? If so, under what circumstances, if any, is this ethically justified? Building on work by Miller and Pearson, we argue that patients have a right to funding for a technology only when the payer can be confident that the technology provides reasonable value for money. Technologies are candidates for CED precisely because serious questions remain about value for money, and therefore patients have no right to technologies under a CED arrangement. This is why CED induces rather than coerces. The separate question of whether the inducement is ethically justified remains. We argue that CED does pose risks to patients, and the worse these risks are, the harder it is to justify the inducement. Finally, we propose conditions under which the inducement could be ethically justified and means of avoiding inducement altogether. We draw on the Australian context, and so our conclusions apply most directly to comparable contexts, where the payer is a government that provides universal coverage with a regard for cost-effectiveness that is prominent and fairly clearly defined.

Incorporating Affordability Concerns Within Cost-Effectiveness Analysis for Health Technology Assessment

Lomas JRS.
Value in Health. 2019;22(8):898-905.

BACKGROUND
Recent policy developments and journal articles have emphasized an apparent paradox: interventions that are found to be cost-effective but unaffordable. This apparent paradox reflects a conventional practice of cost-effectiveness analysis that does not properly evaluate the opportunity costs of an intervention that imposes non-marginal costs on the healthcare system.

OBJECTIVE
Taking the perspective of an exogenously resource constrained decision maker, this paper presents a framework by which concerns for affordability can be appropriately incorporated within cost-effectiveness analysis.

METHODS
A net benefit framework is proposed where health opportunity costs are estimated for each simulation iteration within each time period. The framework is applied to a hypothetical case study based on the recent experience of the English NHS with new hepatitis C drugs.
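
As a simplified reconstruction of the idea, not the paper's model, the Python sketch below converts per-period incremental costs into health opportunity costs using period-specific cost-effectiveness thresholds within each simulation iteration and nets them against incremental health gains. All values, including the thresholds and the budget-impact profile, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim, n_periods = 1_000, 5

# Hypothetical simulation output for a new treatment versus standard care:
# incremental costs per time period (a front-loaded budget impact) and
# total incremental QALYs gained per iteration.
inc_cost = rng.normal(
    loc=[120e6, 80e6, 40e6, 20e6, 10e6], scale=10e6, size=(n_sim, n_periods))
inc_qaly = rng.normal(20_000, 3_000, size=n_sim)

# Hypothetical period-specific thresholds reflecting health opportunity
# costs (cost per QALY displaced elsewhere in the healthcare system).
threshold = np.array([13_000, 13_500, 14_000, 14_500, 15_000])

# QALYs displaced by each iteration's spending profile.
qalys_displaced = (inc_cost / threshold).sum(axis=1)

# Net health benefit per iteration and summary statistics.
nhb = inc_qaly - qalys_displaced
print(f"Expected net health benefit: {nhb.mean():,.0f} QALYs")
print(f"Probability of positive net health benefit: {(nhb > 0).mean():.2f}")
```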

RESULTS
Under the proposed framework, but not under conventional cost-effectiveness analysis, estimates of health opportunity costs differ between scenarios involving different profiles of budget impact, even when their net present value, or expected value, is the same.

CONCLUSION
The framework presented here reflects the importance of the scale of budget impacts along with their uncertainty distribution and time profile. In doing so, it resolves issues with the conduct of conventional cost-effectiveness analysis, in which affordability concerns are not explicitly incorporated.

Patient-Reported Outcomes in Orphan Drug Labels Approved by the US Food and Drug Administration

Hong YD, Villalonga-Olives E, Perfetto EM.
Value in Health. 2019;22(8):925-930.

OBJECTIVES
In recent years, there has been increasing recognition of the need to assess treatment benefit from the patient’s perspective. The extent of patient-reported outcome (PRO) data included in labeling for rare disease treatment is largely unknown. The objective of this study was to review trends over time for PRO-based labeling granted by the US Food and Drug Administration (FDA) for orphan drugs.

STUDY DESIGN
Review of FDA package inserts.

METHODS
Products included in this analysis were all new molecular entities (NMEs) and biologic license applications (BLAs) with orphan designations approved by the FDA from 2002 through 2017. For identified products, package inserts were reviewed to determine the number and type of PRO claim(s) granted, endpoint status, and PRO measure named. Two trends were analyzed: (1) over all years 2002 to 2017 and (2) 2002 to 2017 stratified into 3 periods (before draft FDA PRO guidance [2006], between draft and final guidance release, and after final guidance [2009] release).

RESULTS
A total of 156 NMEs and BLAs with orphan designations were approved between 2002 and 2017. Of these, 13 products (8.3%) had PRO-based labeling, and 7 of the 13 were symptom-related. The percentage of orphan drugs approved with PRO-based labeling was 0% for 2002 to 2005, 10.5% for 2006 to 2008, and 9.9% for 2009 to 2017.

CONCLUSION
In FDA-approved labeling for orphan therapies, PRO measures used as primary and secondary endpoints increased after draft FDA PRO guidance release but remained relatively low thereafter. It is important to understand barriers to PRO measure use to ensure that treatments capture perspectives of patients with rare diseases.


September 2019


As Value Assessment Frameworks Evolve, Are They Finally Ready for Prime Time?

Dubois RW, Westrich K.
Value in Health. 2019;22(9):977-980.

BACKGROUND
Value assessment frameworks have emerged as tools to assist healthcare decision makers in the United States in assessing the relative value of healthcare services and treatments. As more healthcare decision makers in the United States—including state government agencies, pharmacy benefit managers, employers, and health plans—publicly consider the adoption of value frameworks, it is increasingly important to critically evaluate their ability to accurately measure value and reliably inform decision making.

OBJECTIVE
To examine the evolution of the value assessment landscape in the past two years, including new entrants and updated frameworks, and assess if these changes successfully advance the field of value assessment.

METHODS
We analyzed the progress of the three currently active value assessment frameworks developed by the Institute for Clinical and Economic Review, the Innovation and Value Initiative, and the National Comprehensive Cancer Network, against six key areas of concern.

RESULTS
Value assessment frameworks are moving closer to meeting the challenge of accurately measuring value and reliably informing healthcare decisions. Each of the six concerns has been addressed in some way by at least one framework.

CONCLUSIONS
Although value assessments are potential inputs that can be considered for healthcare decision making, none of them should be the sole input for these decisions. Considering the limitations, they should, at most, be only one of many tools in the toolbox.

Barriers and Facilitators to Model Replication Within Health Economics

McManus E, Turner D, Gray E, Khawar H, Okoli T, Sach T.
Value in Health. 2019;22(9):1018-1025.

BACKGROUND
Model replication is important because it enables researchers to check research integrity and transparency and, potentially, to inform the model conceptualization process when developing a new or updated model.

OBJECTIVE
The aim of this study was to evaluate the replicability of published decision analytic models and to identify the barriers and facilitators to replication.

METHODS
Replication attempts of 5 published economic modeling studies were made. The replications were conducted using only publicly available information within the manuscripts and supplementary materials. The replicator attempted to reproduce the key results detailed in each paper, for example, the total costs, total outcomes, and, if applicable, the incremental cost-effectiveness ratio reported. Although a replication attempt was not explicitly defined as a success or failure, the percentage difference between the replicated and the original results was calculated.
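
As a minimal illustration, the Python sketch below shows the percentage-difference comparison described; the function name and values are illustrative only and are not taken from the study.

```python
def pct_difference(replicated: float, original: float) -> float:
    """Percentage difference of a replicated result relative to the original."""
    return (replicated - original) / original * 100

# Illustrative values only: a replicated total cost 5% above the original.
print(pct_difference(replicated=10_500.0, original=10_000.0))  # 5.0
```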

RESULTS
In conducting the replication attempts, common barriers and facilitators emerged. For most case studies, the replicator needed to make additional assumptions when recreating the model. This was often exacerbated by conflicting information presented in the text and the tables. Across the case studies, the variation between original and replicated results ranged from −4.54% to 108.00% for costs and from −3.81% to 0.40% for outcomes.

CONCLUSION
This study demonstrates that although models may appear to be comprehensively reported, it is often not enough to facilitate a precise replication. Further work is needed to understand how to improve model transparency and in turn increase the chances of replication, thus ensuring future usability.

Are Healthcare Choices Predictable? The Impact of Discrete Choice Experiment Designs and Models

de Bekker-Grob EW, Swait JD, Kassahun HT, Bliemer MCJ, Jonker MF, Veldwijk J, Cong K, Rose JM, Donkers B.
Value in Health. 2019;22(9):1050-1062.

BACKGROUND
Lack of evidence about the external validity of discrete choice experiments (DCEs) is one of the barriers that inhibit greater use of DCEs in healthcare decision making.

OBJECTIVES
To determine whether the number of alternatives in a DCE choice task should reflect the actual decision context, and how complex the choice model needs to be to predict real-world healthcare choices.

METHODS
Six DCEs were used, which varied in (1) medical condition (involving choices for influenza vaccination or colorectal cancer screening) and (2) the number of alternatives per choice task. For each medical condition, 1200 respondents were randomized to one of the DCE formats. The data were analyzed in a systematic way using random-utility-maximization choice processes.
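
As a minimal illustration of the random-utility-maximization logic, the Python sketch below computes multinomial logit choice probabilities from systematic utilities (a simpler specification than the heteroskedastic error component models examined in the paper). The attributes, coefficients, and opt-out normalization are hypothetical.

```python
import numpy as np

# Hypothetical attribute levels for 3 screening alternatives
# (columns: effectiveness, risk of side effects, out-of-pocket cost),
# plus an opt-out alternative whose utility is normalized to zero.
X = np.array([
    [0.70, 0.05, 20.0],
    [0.80, 0.10, 40.0],
    [0.90, 0.20, 80.0],
])
beta = np.array([4.0, -6.0, -0.02])  # hypothetical taste parameters

# Random-utility maximization with iid extreme-value errors implies
# multinomial logit choice probabilities (softmax of systematic utilities).
v = np.append(X @ beta, 0.0)          # append the opt-out utility
p = np.exp(v - v.max())
p /= p.sum()
print(dict(zip(["alt1", "alt2", "alt3", "opt-out"], p.round(3))))
```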

RESULTS
Irrespective of the number of alternatives per choice task, the choice for influenza vaccination and colorectal cancer screening was correctly predicted by DCE at an aggregate level, if scale and preference heterogeneity were taken into account. At an individual level, 3 alternatives per choice task and the use of a heteroskedastic error component model plus observed preference heterogeneity seemed to be most promising (correctly predicting >93% of choices).

CONCLUSION
Our study shows that DCEs are able to predict choices—mimicking real-world decisions—if at least scale and preference heterogeneity are taken into account. Patient characteristics (eg, numeracy, decision-making style, and general attitude toward and experience with the health intervention) seem to play a crucial role. Further research is needed to determine whether this result remains in other contexts.

 
