Plain Language Summary
What is it about? The study examines how health technology assessment bodies, such as the National Institute for Health and Care Excellence (NICE), use cost comparison analysis to manage the increasing demand for evaluating health technologies. This type of analysis requires demonstrating that 2 treatments are clinically similar. The topic is vital because it affects how quickly new treatments become available to patients. The researchers addressed the problem of how to assess similarity when no direct head-to-head treatment comparisons exist. Currently, there is little guidance on using indirect treatment comparisons to establish equivalence. The study proposes that adopting formal methods could better address the uncertainties involved in demonstrating treatment similarity. Its central contribution is a set of recommendations for using these methods to improve decision making in health technology assessments.
How was the research conducted? The study is based on a systematic literature review, a method of collecting and analyzing previous research on a specific topic. The researchers applied this approach by reviewing published articles and past NICE appraisals to identify methods for determining equivalence in the absence of head-to-head trials. They conducted 2 complementary reviews: one focused on published methods for indirect treatment comparisons, and the other on technology appraisals that claimed similarity between treatments. Methodological papers, case studies, and NICE appraisals were examined to understand how evidence of similarity is presented and assessed. This approach was chosen to comprehensively assess the current state of practice and identify gaps in existing methodologies.
What were the results? The central finding is that while methods for assessing equivalence in indirect treatment comparisons are emerging, they are not yet widely applied in practice. An important additional finding is that most appraisals relied on narrative summaries rather than formal methods, leading to uncertainties that were often resolved through expert input. A surprising result was that despite the availability of formal methods, none of the reviewed appraisals incorporated them, highlighting a gap between emerging techniques and their application. To support future implementation, the authors produced a series of practice recommendations and created code in R (a programming language often used for statistical analysis) that can be used to visualize results.
Why are the results important? These results are significant for health technology assessment agencies because they highlight the need for standardized methods to enhance the reliability of cost comparison analyses. The findings could change practice by encouraging the use of formal methods, reducing uncertainties, and speeding up decision making. Patients and healthcare providers stand to benefit, as these changes may lead to quicker access to new treatments. In the long term, these results could lead to more consistent and transparent evaluations in health technology assessments, influencing future developments in the field.
What are the strengths and weaknesses of this study? A major strength of the study is its comprehensive review of both the methodological literature and practical appraisals, providing a broad understanding of the current landscape. However, a limitation is its focus on a single national context (NICE in the United Kingdom), which may not capture practices elsewhere. Future research could explore the application of these methods in other health technology assessment settings to further expand understanding and implementation.
Note: This content was created with assistance from artificial intelligence (AI) and has been reviewed and edited by ISPOR staff. For more information or for inquiries on ISPOR’s AI policy, click here or contact us at info@ispor.org.
Authors
Dawn Lee, Alex Allen, Alan Lovell, Ahmed Abdelsabour, Edward C.F. Wilson, G.J. Melendez-Torres