The National Institute for Health and Clinical Excellence (NICE) was established in 1999 with two functions [1]. First, it was required to undertake “appraisals” of new and established interventions with a view to deciding whether they were appropriate for use in the English and Welsh National Health Service, taking account of both clinical effectiveness and cost-effectiveness. Second, NICE was mandated to develop clinical practice guidelines, again taking account of cost-effectiveness as well as clinical effectiveness. NICE's guidance programs have grown substantially since those early days [2] and now include advice on the safety and efficacy of new procedures, measures to improve public health, and a special program assessing the value of potentially cost-saving devices and diagnostics.
In 1999, the use of cost-effectiveness analysis in the planning of services to be delivered by Britain's National Health Service was not new but had largely been carried out covertly. The creation of NICE made cost-effectiveness an overt component of decision making in the National Health Service and was enthusiastically supported by the UK's community of health economists. The methods that NICE developed for assessing cost-effectiveness were heavily influenced by the prior work of various academic institutions, and NICE indicated from the outset that its preferred approach to the economic assessment of interventions in health (and later public health) was cost-utility analysis.
The impact of health economists and other outcomes researchers on NICE's activities can be seen in a number of ways. First, the need to calculate quality-adjusted life-years (QALYs) has led to calls for the inclusion of more relevant end points in clinical trials and for the trials themselves to have longer follow-up. Second, the need to consider relevant treatment alternatives has led to developments in the methods of evidence synthesis, including indirect and mixed treatment comparisons in situations in which head-to-head clinical trials do not exist. Third, evidence on clinical effectiveness and cost-effectiveness by subgroups of the patient population has led NICE to refine its guidance, targeting the patient groups that will benefit the most from therapy.
Perhaps more indirectly, the need to explain difficult economic concepts has also led NICE to bolster its efforts to be as transparent as possible about its methods of analysis and the guidance that follows. Certainly, the recent interest from pharmaceutical companies in early dialogue with NICE about their research programs for some products is a very direct result of the recognition that good evidence on the clinical effectiveness and cost-effectiveness of new products needs to be produced.
Despite these massive contributions, challenging problems remain. Nevertheless, I anticipate that the health economics community—given time—will help resolve them. The most urgent issues are as follows:
- The threshold: The basis for determining the “threshold” for distinguishing cost-effective from cost-ineffective interventions requires further work. The notion that the cost-effectiveness of all interventions is ranked and that the “threshold” is the point at which the health care budget is exhausted is totally impractical.
- Equity: The conventional approach to the economic evaluation of interventions is inherently utilitarian unless decision makers adopt flexibility in reaching conclusions that take account of societal preferences. NICE recognized the importance of such “social value judgments” from the beginning [3]. It therefore established a Citizens Council in an attempt to elicit societal preferences in resource allocation. This has worked well, and NICE's decision-making bodies incorporate social values in drawing their conclusions. It would be better, though, if—at least in part—such judgments were to be incorporated quantitatively, rather than qualitatively, into decision making. In other words, “equity weighting” needs to be developed in a robust manner so that allocative decision-making becomes more explicit.
- The QALY: The QALY remains (despite its acknowledged imperfections) the bedrock of cost-utility analysis. NICE owes a great debt to those who devised and validated the EuroQol five-dimensional questionnaire and who continue to attempt to improve it. Nevertheless, I am uneasy about the mantra of “a QALY is a QALY is a QALY.” It means that an increase in utility from 0.3 to 0.5 is valued the same as an increase from 0.7 to 0.9. I am not sure this is fair. It certainly fails to meet John Rawls's approach [4] to distributive justice, which states that resources should be allocated in a manner that brings the greatest benefit to the least-advantaged members of society. This would mean, in cost-utility analysis, using a proportionate, rather than a simple incremental, approach in estimating the QALY gained.
- Clinical guidelines: The use of cost-effectiveness analysis in the development of clinical guidelines is still very simplistic. At NICE, we examine the cost-effectiveness of individual components but not the pathway of care as a whole. Methods need to be developed for doing so, as well as for determining what “thresholds” should be applied, because it is not inherently obvious (to me at least) that these would be the same as those applied to individual interventions.
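The "league table" notion of the threshold criticized above can be made concrete with a short sketch. The intervention names, costs, and QALY figures below are entirely invented for illustration; the point is only to show the mechanics being rejected: rank interventions by cost per QALY, fund down the list until the budget is exhausted, and treat the cost per QALY of the last funded intervention as the threshold.

```python
# Hypothetical "league table" approach to setting a cost-effectiveness
# threshold. All names and figures are invented for illustration.
interventions = [
    # (name, cost per patient, QALYs gained per patient)
    ("A", 2_000, 1.0),
    ("B", 15_000, 1.0),
    ("C", 40_000, 0.8),
    ("D", 90_000, 0.5),
]

budget = 60_000
spent = 0
threshold = None

# Rank by cost per QALY (most cost-effective first) and fund in order
# until the next intervention would exceed the budget.
for name, cost, qalys in sorted(interventions, key=lambda x: x[1] / x[2]):
    if spent + cost > budget:
        break
    spent += cost
    threshold = cost / qalys  # cost/QALY of the last intervention funded

print(threshold)  # prints 50000.0: the marginal cost per QALY funded
```

In practice, as the text argues, no one possesses a complete ranking of all interventions against a fixed budget, which is why this tidy construction is impractical as a basis for the real threshold.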
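The QALY point above can be illustrated numerically. The functions below are hypothetical, not NICE methodology: `incremental_gain` is the standard "a QALY is a QALY" calculation, and `proportionate_gain` is one plausible formalization of a proportionate approach, scaling the utility gain by the patient's shortfall from full health so that the same absolute improvement counts for more when the starting state is worse.

```python
def incremental_gain(u_before, u_after, years=1.0):
    # Standard incremental QALY gain: absolute utility change x duration.
    return (u_after - u_before) * years

def proportionate_gain(u_before, u_after, years=1.0):
    # Hypothetical "proportionate" variant: the gain is scaled by the
    # shortfall from full health (1 - u_before), so a gain from a worse
    # starting state is weighted more heavily.
    return (u_after - u_before) / (1.0 - u_before) * years

# Under the incremental approach the two gains in the text are identical:
print(round(incremental_gain(0.3, 0.5), 3))    # 0.2
print(round(incremental_gain(0.7, 0.9), 3))    # 0.2

# Under the proportionate variant they differ:
print(round(proportionate_gain(0.3, 0.5), 3))  # 0.286
print(round(proportionate_gain(0.7, 0.9), 3))  # 0.667
```

Note that this particular weighting happens to favor patients closer to full health, since their shortfall is smaller; a Rawlsian scheme would instead need a weight that rises with severity, which is exactly the kind of robust "equity weighting" the text calls for.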
The health economics and outcomes research community has done much for health care, but there is still much more to be done, and I am confident that the community, as a whole, will rise to the challenge.