The Value of Artificial Intelligence for Healthcare Decision Making—Lessons Learned [Editor's Choice]

Abstract

Interest and investment in the development of tools or methods that rely on artificial intelligence (AI) algorithms to improve health or healthcare are increasing. Propelling this renewed interest are a growing amount of electronic data about individual health, population health, and consumer choice, as well as advances in machine learning methods.

Although harnessing these data to develop AI tools that can improve healthcare delivery and inform diagnosis and treatment decisions has the potential to significantly improve health systems globally, there are various ethical, regulatory, and economic challenges that must be addressed.

This Value in Health themed section includes a series of articles that not only demonstrate the value of AI for healthcare but also illustrate key barriers to widespread adoption of these technologies. Of the 26 articles submitted to the themed section, 6 were accepted for publication (23%). Articles were submitted from several regions of the world, including Asia, Europe, and North America, showing global interest in the contribution of AI to healthcare.
This themed section covers 3 main questions:

1.  Can AI be evaluated like any other health technology?

2.  What is the current state of knowledge about the efficiency of AI in healthcare?

3.  What is the level of acceptance and adoption of healthcare AI tools?



Can AI Be Evaluated Like Any Other Technology?

As a growing number of AI devices enter the healthcare market, health technology assessment (HTA) bodies face new challenges in evaluating the efficiency of these devices. Indeed, a major concern raised in this themed section relates to the need to adapt current health economic evaluation models to determine the value of AI devices. The articles identify several methodological challenges.

According to Hendrix et al, evaluators need to adapt the design of health economic models to the anticipated use of the AI tool and carefully consider how to define the intervention arm and the standard-of-care comparator. Evaluating the efficiency of AI is more complex than evaluating pharmaceutical products because the intervention arm receiving the device often lacks uniformity. As underlined by the authors, “If HTA is being used to decide how an AI-based tool is implemented, health economists should consider including as comparators all the potential ways that an AI could be used alongside clinicians.” Consequently, AI-based evaluations are likely to include “radically greater numbers of comparators” compared with the traditional models used to assess the efficiency of pharmaceutical products. This likely contributes to why, as Voets et al found, health economic evaluations of AI tools to date have primarily been conducted in the domain of medical image analysis, where the use of AI can easily be harmonized across different medical practices. Hendrix et al highlight the extent to which economic evaluations need to be adapted to different situations. For instance, if an AI tool’s performance is constantly improving in response to more data, researchers will need to implement dynamic health economic evaluations. Moreover, in all evaluations, there is a need to balance the potential positive outcomes of using AI tools against the risk of negative side effects, such as the possible erosion of providers’ skills as some processes are automated.
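To make the multiple-comparator problem concrete, the following is a minimal Python sketch of what such an analysis can look like in structure. Every strategy name, cost, QALY figure, and the willingness-to-pay threshold is hypothetical; the sketch illustrates the general shape of a multi-comparator cost-effectiveness comparison, not any model from the articles discussed here.

```python
# Purely illustrative: all strategy names, costs, QALYs, and the
# willingness-to-pay threshold are hypothetical and are not drawn
# from the articles discussed in this themed section.

from dataclasses import dataclass


@dataclass
class Strategy:
    name: str
    cost: float   # expected cost per patient
    qalys: float  # expected quality-adjusted life-years per patient


standard_of_care = Strategy("Clinician alone", cost=1000.0, qalys=8.00)

# Following Hendrix et al's advice, each potential way the AI could be
# used alongside clinicians enters the analysis as a separate comparator.
strategies = [
    Strategy("AI triage, clinician confirms", cost=1050.0, qalys=8.06),
    Strategy("AI and clinician read independently", cost=1150.0, qalys=8.09),
    Strategy("AI alone, clinician reviews flagged cases", cost=980.0, qalys=8.02),
]

WTP_PER_QALY = 50_000.0  # hypothetical willingness-to-pay threshold

for s in strategies:
    d_cost = s.cost - standard_of_care.cost
    d_qalys = s.qalys - standard_of_care.qalys
    if d_cost <= 0 and d_qalys >= 0:
        # Cheaper and at least as effective: no ratio needed.
        print(f"{s.name}: dominates standard of care")
    else:
        icer = d_cost / d_qalys  # incremental cost per QALY gained
        verdict = "cost-effective" if icer <= WTP_PER_QALY else "not cost-effective"
        print(f"{s.name}: ICER = {icer:,.0f} per QALY ({verdict})")
```

A dynamic evaluation of a tool whose performance improves as it sees more data would re-estimate each strategy's cost and QALY inputs over time rather than treating them as fixed.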

What Is the Current State of Knowledge About the Efficiency of AI in Healthcare?

Likely at least in part because of these challenges of evaluating AI tools, the articles in this themed section show that the current state of scientific knowledge about the economic impact of AI in healthcare remains limited. Based on a systematic literature review, Voets et al found only a limited number of economic evaluations of AI tools, most of which “focus on cost impacts rather than health impacts.” Even where economic evaluations of healthcare AI tools do exist, Voets et al found that they are “often of suboptimal quality.” Part of the reason, as explained by Hendrix et al, relates to “difficulties around collecting data on... clinical impacts [which can] make value assessment challenging.”

The authors also warn that existing evaluations may be biased by unforeseen effects associated with the adoption of AI tools, as “many uncertainties remain about how these technologies can be used to create rather than destroy value.” For example, although AI tools can improve the health of some targeted populations and enhance access to diagnosis and treatment for the general population, they can also sometimes contribute to health disparities. Finally, there is growing concern about the generalizability of current evidence supporting the economic impact of AI devices. Indeed, there is a lack of evidence regarding the long-term impact of AI on health outcomes, given that less than half of the articles reviewed by Voets et al used a time horizon of more than 1 year. This lack of evidence contrasts with the growing number of applications and devices being introduced in the market and calls for increased use of real-world data to evaluate the impact of AI in the general population.

Despite these concerns, some results presented in this themed section suggest that AI tools are likely to increase the efficiency of the healthcare system and that it is possible to provide robust evidence of their economic impact using real-world data. For example, Rodriguez et al demonstrate how real-world data can be used to model the impact of an AI risk-prediction tool on health outcomes to inform decisions about clinical adoption of new lung transplant referral models for people with cystic fibrosis. Additionally, de Vos et al evaluated the cost-effectiveness of an AI tool designed to predict and prevent untimely intensive care unit (ICU) discharge, finding that the tool was cost-effective because it improved decision making around ICU discharge.
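As a rough illustration of how improved discharge decisions can translate into a cost-effectiveness result, consider the minimal expected-cost sketch below. It is not the model used by de Vos et al; every probability and cost is invented, and a real evaluation would also weigh health outcomes rather than costs alone.

```python
# Hypothetical expected-cost comparison for ICU discharge decisions,
# with and without a prediction tool. All figures are invented for
# illustration; this is not the model evaluated by de Vos et al.

ICU_DAY_COST = 2500.0       # cost of keeping a patient one extra ICU day
READMISSION_COST = 15000.0  # cost of an ICU readmission after an untimely discharge

p_untimely_without_tool = 0.10  # share of discharges that prove untimely
p_untimely_with_tool = 0.04     # residual share after the tool flags risky discharges
flagged = p_untimely_without_tool - p_untimely_with_tool  # discharges delayed by the tool

# Without the tool, every untimely discharge incurs a readmission;
# with the tool, flagged patients instead stay one extra ICU day.
cost_without = p_untimely_without_tool * READMISSION_COST
cost_with = p_untimely_with_tool * READMISSION_COST + flagged * ICU_DAY_COST

print(f"Expected cost per discharge without tool: {cost_without:,.0f}")
print(f"Expected cost per discharge with tool:    {cost_with:,.0f}")
print(f"Expected saving per discharge:            {cost_without - cost_with:,.0f}")
```

Under these invented numbers the tool saves money even before counting any health benefit from avoided readmissions; with real-world inputs, the same basic structure yields the kind of cost-effectiveness evidence reported in the article.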


What Is the Level of Acceptance and Adoption of Healthcare AI Tools?

Although demonstrating the value and efficiency of healthcare AI tools is important, it is not sufficient for promoting the adoption and scaling of these tools in clinical practice. Two articles in this themed section highlight additional challenges associated with the development, deployment, and use of healthcare AI tools that are hindering their overall acceptance and adoption. One important challenge, discussed by Hashiguchi et al, is that although there is a significant and growing amount of electronic health-related data, much of these data are of poor quality and are rarely shared. To promote appropriate data sharing, the authors discuss the need for improved data governance structures that ensure “high quality data are available [for relevant actors] and secure” as well as the adoption of internationally agreed-upon standards for data terminology. Currently, numerous initiatives across the globe are focused on improving electronic data quality and governance, which will hopefully increase the availability of high-quality data to support the development and validation of healthcare AI tools in the future.

Nevertheless, even in cases where there are sufficient data, important implementation challenges exist, the most significant of which may be the need to garner public and provider trust in healthcare AI devices. Based on a large online survey of the Dutch population, Yakar et al found that respondents from the general public reported relatively low levels of trust in AI tools for dermatology, radiology, and surgery. Although the study shows that trust levels differ by medical specialty and by individual characteristics such as age and education level, the authors suggest that increasing trust will require providing the public with more information about how healthcare AI tools work. Consistent with this, Hashiguchi et al point to the need for increased transparency, public communication, and stakeholder engagement in defining appropriate data governance structures to support the development of healthcare AI devices and further promote trust. These authors also discuss the need for additional education to support the appropriate use of AI devices in clinical and public health settings.

Conclusions

Healthcare AI tools have the potential to improve the efficiency of care pathways by allowing providers to better determine patients’ diagnoses and orient them toward more effective care options. Even though, as Hendrix et al state, “it cannot be assumed that AI will increase productivity in all circumstances,” healthcare AI tools could improve the organization of healthcare systems and reduce healthcare spending: by raising care practitioners’ productivity, these tools could lessen the so-called Baumol effect, whereby costs in labor-intensive sectors rise because productivity there grows more slowly than wages.

Although the potential of these technologies is significant, articles in this themed section show that demonstrating the economic value of healthcare AI tools can be challenging and that current evidence regarding the efficiency of these tools is limited. In addition, more evidence is needed to determine the organizational impact of AI. A prolonged absence of evidence supporting the economic impact of AI tools could be detrimental to their deployment in healthcare. Indeed, efficiency analyses are needed to build business models and determine value-sharing agreements between manufacturers and social insurance systems. Therefore, it is crucial to go beyond traditional cost-effectiveness evaluations and adapt current HTA models to better determine the efficiency of AI tools.

In that regard, AI is currently facing the same issues that telemedicine faced a decade ago in many countries. Policy makers expected a level of evidence that, in some instances, was too high and too complex for many telemedicine manufacturers to achieve. In many countries, requiring telemedicine companies to provide the same level of scientific evidence as in the pharmaceutical market blocked the national implementation of telemedicine for several years, despite the fact that some evaluations showed that this technology could contribute to cost savings and improve healthcare delivery. Policy makers and HTA bodies in many countries waited until the COVID-19 public health emergency to ease market access for telemedicine devices. Similar barriers are currently seen in the healthcare AI market. Although AI has received considerable public attention, most healthcare AI devices are still at the research stage. Additionally, this themed section shows that the public is not yet ready to trust these devices and that patients and providers may need more information and assurance to feel comfortable with AI in healthcare settings.

Despite these hurdles, it is likely that the use of AI in healthcare settings will continue to increase for 2 main reasons. First, many countries are currently organizing better access to real-world data, which will certainly increase the number and quality of health economic evaluations of AI devices. For example, in Europe, France has launched the Health Data Hub to ease access to data collected by the National Health Insurance System. These initiatives are promoting better governance and regulation of data use, as well as better and more secure data infrastructure. Second, information sharing and learning by doing will certainly allow for the more efficient development of AI in healthcare. Like any innovation, the efficiency of AI will be driven by its adoption and use by doctors and patients, which cannot be fully anticipated by ex ante health economic evaluations.

Authors

Danielle Whicher, Thomas Rapp
