Decision Models Need to be “Fit for Purpose” for Decision-Making- Response to Caro et al.
Jul 1, 2007
10.1111/j.1524-4733.2007.00177.x
https://www.valueinhealthjournal.com/article/S1098-3015(10)60618-3/fulltext
We thank Dr Caro et al. for their reflections. It was not the purpose of our article to suggest that decision models based on patient-level simulations (PLS) were inappropriate. On the contrary, we accept that there may well be situations where the specifics of disease prognosis and treatment effects will necessitate such an approach.
The key feature of any decision model, however, is that it must be “fit for purpose” for decision-making. That is, such models need to generate estimates of expected cost-effectiveness, of the decision uncertainty associated with each option, and of the cost of that uncertainty, and hence the potential value of further research [1]. We are impressed that Caro et al. find the computational task of generating these required outputs trivial in PLS models. As evidenced by our review, others have not found it quite so straightforward: only one of six PLS models included in technology assessments for the National Institute for Health and Clinical Excellence in the UK conducted probabilistic analysis, compared with 16 of 24 cohort models [2]. This is probably not surprising for PLS models with the levels of complexity apparently advocated in most situations by Caro et al. Such models may require 10,000 individual patient simulations to provide a single stable estimate of expected cost-effectiveness (Caro et al.’s implication that 1000 patients suffice in a complex model seems low), and a further level of 10,000 draws from parameter distributions would then be needed to reflect decision uncertainty (i.e., 10,000 × 10,000 = 100,000,000 simulations). Further levels of simulation would be required to evaluate fully the value of perfect information overall and for individual parameters. For most modelers, this is far from a trivial undertaking. If Caro et al. have innovative methods (other than emulators such as a Gaussian process [3], which they seem to disparage) to speed the process up, it would be of benefit to the modeling community for these to be published.
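To make the scale of this burden concrete, the sketch below shows the two-level Monte Carlo structure described above: an outer loop of draws from parameter distributions (to characterize decision uncertainty) wrapped around an inner loop of simulated patients (to stabilize each estimate of expected costs and outcomes). The toy patient model, the particular distributions, and the variable names are illustrative assumptions only, not taken from any of the models in our review; the point is simply that the total work scales as the product of the two loops.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PARAM_DRAWS = 10_000  # outer loop: parameter draws for probabilistic analysis
N_PATIENTS = 10_000     # inner loop: simulated patients per draw (reduce both to run quickly)


def simulate_patient(p_event, cost_event, qaly_if_no_event, rng):
    """Toy patient-level simulation: a single event risk, cost, and QALY outcome."""
    event = rng.random() < p_event
    cost = cost_event if event else 0.0
    qalys = 0.0 if event else qaly_if_no_event
    return cost, qalys


mean_costs = np.empty(N_PARAM_DRAWS)
mean_qalys = np.empty(N_PARAM_DRAWS)

for i in range(N_PARAM_DRAWS):
    # One draw from illustrative parameter distributions (decision uncertainty).
    p_event = rng.beta(20, 80)
    cost_event = rng.gamma(shape=4, scale=2_500)
    qaly_if_no_event = rng.normal(0.8, 0.1)

    # Inner loop: average over simulated patients to stabilize the expectation.
    costs = np.empty(N_PATIENTS)
    qalys = np.empty(N_PATIENTS)
    for j in range(N_PATIENTS):
        costs[j], qalys[j] = simulate_patient(p_event, cost_event, qaly_if_no_event, rng)
    mean_costs[i] = costs.mean()
    mean_qalys[i] = qalys.mean()

# 10,000 parameter draws x 10,000 patients = 100,000,000 patient simulations in total.
print(mean_costs.mean(), mean_qalys.mean())
```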
It is important to remember that all models seek to approximate reality rather than replicate it: “all models are wrong, but some are useful,” in George Box’s words [4]. This is as true of decision models based on PLS as of any other; if it were not, why would such analyses stop at the level of individual patients rather than modeling at the organ-system, cellular, molecular, or even atomic level? Given the nature of the decision problem the model is seeking to inform, judgments always have to be made about the appropriateness of simplifying assumptions. We would argue that the additional parameterization and computational burden of PLS compared with cohort models means that analysts should carefully assess whether these extra costs can be justified in terms of their impact on the ultimate decision. We believe that most of the features of a disease that Caro et al. feel necessitate the use of PLS can be handled appropriately in cohort models. There may be a limit to this, and PLS may be deemed necessary to reflect the complexities of a disease process, but analysts cannot avoid the need to quantify decision uncertainty and the value of information, an omission of which most of the PLS modelers in our review were guilty.
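As an illustration of what such quantification involves, the following is a minimal sketch of computing the expected value of perfect information (EVPI) from probabilistic analysis output, assuming one already has a matrix of net-benefit samples (parameter draws by decision options). The function name and the illustrative net-benefit figures are our own assumptions for the example, not part of any specific model.

```python
import numpy as np


def evpi(net_benefit):
    """
    Expected value of perfect information from probabilistic analysis output.

    net_benefit: array of shape (n_parameter_draws, n_options), giving the net
    benefit of each decision option under each parameter draw.

    EVPI = E[ max over options of net benefit ] - max over options of E[ net benefit ].
    """
    value_with_perfect_info = net_benefit.max(axis=1).mean()
    value_with_current_info = net_benefit.mean(axis=0).max()
    return value_with_perfect_info - value_with_current_info


# Illustrative example: two options, 10,000 parameter draws.
rng = np.random.default_rng(0)
nb = np.column_stack([
    rng.normal(10_000, 3_000, 10_000),  # option A
    rng.normal(11_000, 3_000, 10_000),  # option B
])
print(f"Per-patient EVPI: {evpi(nb):.0f}")
```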
We find little to disagree with in Caro et al.’s closing comment that models need to be as realistic as necessary to inform decisions. However, there is no reason why this should imply PLS in each and every case. —Susan Griffin, Karl Claxton, Neil Hawkins, and Mark Sculpher, University of York, York, UK.
References
1 Claxton K, Sculpher M, McCabe C, et al. Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra. Health Econ 2005;14:339–47.
2 Griffin S, Claxton K, Hawkins N, Sculpher MJ. Probabilistic analysis and computationally expensive models: necessary and required? Value Health 2006;9:244–52.
3 Stevenson MD, Oakley J, Chilcott JB. Gaussian process modelling in conjunction with individual patient simulation modelling: a case study describing the calculation of cost-effectiveness ratios for the treatment of established osteoporosis. Med Decis Mak 2004;24:89–100.
4 Box GEP. Robustness in the strategy of scientific model building. In: Launer RL, Wilkinson GN, eds. Robustness in Statistics. New York: Academic Press, 1979.
Categories :
- Methodological & Statistical Research
- Missing Data
- Modeling and simulation
- PRO & Related Methods