Evaluating New Health Technologies and Disease Burden in Developed Countries

Sep 1, 2012
10.1016/j.jval.2012.04.009
https://www.valueinhealthjournal.com/article/S1098-3015(12)01606-3/fulltext
Section Title : Letters to the Editor
First Page : 987
To the Editor
We read with interest the article by Martino and colleagues published recently in Value in Health. Their study examines the relationship between the reporting of new and emerging health technologies uploaded onto the EuroScan database (from 2000 to 2009) and the burden of disease in 17 developed countries, most of them in Europe []. The motivation for their study is the increasing use of disease burden measures in research and innovation priority setting and, in particular, in horizon scanning or early awareness and alert activities. To that end, the authors chose 1479 individual indications corresponding to 1371 unique technologies (45% of them drugs and 23% devices) as the output measure for innovation. Overall, they suggest a weak association between innovation and disease burden in terms of disability-adjusted life-years (DALYs). The article raises several issues, however, and fails to cite some relevant articles published in the last 5 years in this field [,,,,,].

The authors argue that “[o]nly Lichtenberg used output measures of innovation and found a positive relationship [with disease burden] among developed countries (…) based primarily on pharmaceuticals launched; [and that] drugs currently on sale and relevant published articles were used as innovation outcomes in additional analyses, but these were limited to the United States and cancer, respectively” []. In this regard, the authors failed to cite a report (published in 2010) with some degree of overlap, in which we further discussed questions about the current extent of the dilemma in pharmaceutical innovation [,]. In our work [], the full cohort of human-use drugs authorized by the European Medicines Agency (1995–2009) was evaluated. In particular, we found a positive correlation between DALYs and new drug development. Interestingly, the main disease categories in terms of the number of innovative drugs were cancer, infectious diseases, and blood and endocrine disorders (accounting for 47% of new molecules). Some conditions appeared to be neglected, relative to the disease burden they generate in the population, as in the case of neuropsychiatric disorders, cardiovascular diseases, respiratory diseases, and so on. Conversely, Martino and colleagues found that the main disease categories in terms of the number of innovative technologies were cancer, cardiovascular diseases, and neuropsychiatric disorders. When we compare our correlation coefficients with those obtained by Martino and colleagues, the association between DALYs and innovation was weaker in our study: 0.61 (P = 0.006) versus 0.72 (P < 0.001) for developed high-income countries. There are, however, several important differences between the studies. We studied drugs (new molecules and marketing authorizations), whereas Martino and colleagues studied technologies, including devices and diagnostics as well. Our analyses focused only on the main indications matched with the categories of the disease classification system defined in the Global Burden of Disease (GBD) study, whereas Martino and colleagues considered all therapeutic indications for a technology (e.g., each of the multiple different indications for a monoclonal antibody was counted as equally significant), which may not always be representative of innovation.
The authors correctly observe, as we have previously documented in cost-effectiveness research [], that disaggregating broader categories into specific diseases further weakened the association. We believe, however, that the most important issue with Martino and colleagues' study is that misclassification is likely to have occurred more often among the “other” subcategories (e.g., “other cardiovascular diseases” and “other malignant neoplasms”) than within the broader categories. We recognize that some arbitrariness is involved in classifying technologies into specific disease conditions, and other researchers might have classified them differently. The authors showed that, among the specific disease indications accounting for the highest numbers of technologies, nearly 40% (507 of 1371) were, paradoxically, “unspecific” ones. As they mentioned, “other cardiovascular diseases” and “other malignant neoplasms” had the highest numbers of innovations (150 and 85, respectively), suggesting that “innovation is disproportionately strong in cancer and nonischemic heart disease.”
In the GBD 1990 study [], one of the most significant barriers to accurately determining the causes of disease burden was the widespread use of nonspecific cause-of-death codes, such as ill-defined cardiovascular, cancer, and injury codes. Recent articles [,] stress that such “garbage codes” negatively affect the public health utility of cause-of-death data. Correction algorithms were applied in the GBD study to resolve problems of miscoding for cardiovascular diseases (mainly involving redistribution of deaths coded to heart failure, ventricular dysrhythmias, or ill-defined heart disease) or cancer (involving redistribution of deaths coded to secondary sites or ill-defined primary sites). In particular, heart failure was not considered an underlying cause of death under the GBD definition but rather an intermediate cause of death with a diverse range of possible underlying causes; instead, heart failure deaths were classified under coronary heart disease. Similarly, cancer deaths coded to malignant neoplasms of other and unspecified sites (including those whose point of origin could not be determined) and to secondary and unspecified cancers were redistributed across the malignant neoplasm categories within each age-sex group [,,].
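For illustration only, the following minimal sketch (in Python, with invented counts that are not taken from the GBD study) shows the kind of proportional redistribution such correction algorithms perform when deaths assigned to a nonspecific code are spread across specific causes within a single age-sex group; the actual GBD algorithms are considerably more elaborate (cause-specific target lists, age-sex strata, and pattern-based weights).

```python
# Illustrative sketch only: proportional redistribution of deaths assigned
# to a nonspecific ("garbage") code across specific target categories
# within one age-sex group. All figures below are invented.

def redistribute_garbage(specific_counts: dict, garbage_count: float) -> dict:
    """Spread garbage-coded deaths across specific causes in proportion
    to the deaths already assigned to each specific cause."""
    total_specific = sum(specific_counts.values())
    if total_specific == 0:
        raise ValueError("No specific causes to redistribute onto.")
    return {
        cause: count + garbage_count * count / total_specific
        for cause, count in specific_counts.items()
    }

# Hypothetical cancer deaths in one age-sex group (invented figures):
observed = {"lung": 400.0, "colorectal": 250.0, "breast": 150.0}
ill_defined = 100.0  # deaths coded to "other/unspecified malignant neoplasms"

corrected = redistribute_garbage(observed, ill_defined)
print(corrected)  # {'lung': 450.0, 'colorectal': 281.25, 'breast': 168.75}
```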
Therefore, miscoding and misclassification may have had a considerable impact on their study findings at the level of specific diseases, and we believe that the results should be interpreted carefully. To demonstrate this in part, we present an alternative version of their Figure 2 that excludes the “other” (nonspecific) conditions (using a selection of the highest-ranking specific causes for reported technologies from Table 2 in the article), illustrating that there was no evidence in the data of a true correlation between DALYs and innovation for particular disease conditions (R² linear = 0.06; correlation coefficient r = 0.24; P = 0.17) (Fig. 1).
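As a purely illustrative sketch of the kind of recalculation underlying Figure 1, the following Python snippet shows how a Pearson correlation between DALYs and reported technologies can be recomputed after dropping the nonspecific “other” categories. The condition names and figures below are invented placeholders, not the data from Martino and colleagues' Table 2 or from our analysis.

```python
# Illustrative sketch only: recompute a DALY-innovation correlation after
# excluding nonspecific ("other ...") categories. All figures are invented.
from scipy.stats import pearsonr

data = {
    # condition: (DALYs in thousands, number of reported technologies)
    "Ischaemic heart disease":       (9000, 40),
    "Diabetes mellitus":             (5000, 35),
    "Breast cancer":                 (2000, 30),
    "Unipolar depressive disorders": (8000, 12),
    "COPD":                          (4000, 10),
    "Other cardiovascular diseases": (3000, 150),  # nonspecific category
    "Other malignant neoplasms":     (2500, 85),   # nonspecific category
}

# Keep only the specific disease conditions.
specific = {k: v for k, v in data.items() if not k.startswith("Other")}

dalys, techs = zip(*specific.values())
r, p = pearsonr(dalys, techs)
print(f"r = {r:.2f}, P = {p:.3f}")
```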

Finally, we strongly disagree with the authors' most surprising conclusion that “[t]he results do not support previous reports of a positive relationship between burden of disease and innovation, but accord with evidence of notable discrepancies among key groups.” Martino and colleagues may wish to reconsider their results and conclusions in light of all the above.
HEOR Topics :
  • Cost/Cost of Illness/Resource Use Studies
  • Economic Evaluation
  • Health Technology Assessment
  • Systems & Structure
Regions :
  • Global