Confronting the Reality of AI/ML in Care Delivery

Provider organizations are far less optimistic about artificial intelligence (AI) than they were four years ago—even though more of them than ever are using AI in clinical applications. This apparent tension raises important questions about the future of AI and machine learning (ML) in health care. Does flagging optimism in an environment of increasing AI adoption indicate a “tech-lash”—that is, a backlash against overhyped technology? Is it simply hard-earned and clear-eyed realism? Or is it a missed opportunity?

Advisory Board recently conducted a survey of strategic and analytics leaders at 250 provider organizations to learn how those organizations are currently using and plan to use AI.1 The survey asked many of the same questions as a similar survey conducted in 2018, allowing us to compare how provider organizations’ perspectives have evolved. Early insights from that data show that providers have less faith than in previous years that AI will power a transformation of health care. Many continue to struggle with data itself. At the same time, the least-adopted clinical or population health application of AI in 2022 is more widespread than the most-adopted application was in 2018.

Provider organizations’ expectations for AI shift from transformational to incremental

Overall sentiment about AI has remained positive among provider organizations since 2018, although it has not grown. About 63% of respondents had positive expectations for AI in 2022, answering either “we believe AI will become a transformative, essential part of our health system” or “we believe AI will deliver incremental value.” A similar proportion, 64%, expressed a positive outlook on AI in 2018. However, the percentage of organizations expecting transformational change was cut nearly in half: only 19% expect AI to become transformational and essential at their organization, down from 37% in 2018. Expectations for incremental value from AI climbed from 27% in 2018 to 43% in 2022.

More provider organizations are also offering a negative assessment of AI than in 2018. Then, the share of organizations that did not expect to get value from AI was 5%. In 2022, that figure rose to 8%, and the increase was especially dramatic among small organizations (defined as those that actively care for fewer than 100,000 individuals annually). In 2018, only 3% of small provider organizations said they did not expect to get value from AI. That proportion has tripled: in 2022, 9% of small organizations said they expect no value from AI.

This shift comes even as more organizations are using AI in clinical and population health applications. Provider organizations are already using AI for care variation reduction/protocol compliance (16%), risk and care gap identification (14%), precision medicine (18%), and creating new standards of care (14%). Four years ago, the most widely adopted use of AI in clinical care (care variation reduction/protocol compliance) was in place at only 12% of organizations. That makes the least-adopted clinical or population health application of AI in 2022 (14%) more widespread than the most-adopted application was in 2018 (among the applications included in the survey).
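A quick back-of-the-envelope check, sketched here in Python using only the figures reported above, makes that comparison concrete:

    # Adoption figures quoted in this article (percent of provider
    # organizations already using AI for each application).
    adoption_2022 = {
        "care variation reduction/protocol compliance": 16,
        "risk and care gap identification": 14,
        "precision medicine": 18,
        "creating new standards of care": 14,
    }
    most_adopted_2018 = 12  # care variation reduction/protocol compliance

    least_2022 = min(adoption_2022.values())
    print(f"Least-adopted in 2022: {least_2022}%")
    print(f"Most-adopted in 2018: {most_adopted_2018}%")
    assert least_2022 > most_adopted_2018  # 14% > 12%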

More provider organizations than ever have plans to use AI across those domains in the future. The percentages of organizations with a committed plan to use AI for care variation reduction/protocol compliance (37%), risk and care gap identification (34%), and precision medicine (38%) each exceeded their 2018 levels. The percentage with a plan to use AI for creating new standards of care slipped to 31%, but it had been the most-planned-for application in 2018 (33%).

Where are we going from here?

Shifts in optimism are worrisome, but they don’t approach the severity that would indicate a full-blown backlash against AI/ML. Broader adoption and planning for use of AI suggest that provider organizations are not moving toward disinvestment—despite their generally thin margins and escalating labor costs. That said, high-profile, overhyped proclamations about the transformational potential of AI have made commitment to AI harder, not easier. The debacle of IBM Watson Health raises questions about the viability of AI in health care, even though the lesson to learn—especially now that Watson Health has been sold off in pieces—is that the ingredients needed for AI/ML have not developed at the same pace as the technology. IBM gobbled up data and software applications, but that alone was insufficient to support the Watson AI. Consider also the numerous predictions of radiologists’ replacement by AI, which ignore the real and current value of the technology as an additive, augmentative tool for the radiologist.

Rather than a rejection of the technology, the shift in enthusiasm among provider organizations from “transformational AI” to “incrementally valuable AI” most likely represents an increasingly nuanced understanding of the challenges around AI/ML—especially the challenges of making AI effective in the real world. As adoption increases, health care needs clear-eyed realism about two specific challenges of AI: working with data, and decision-making with data.

Working with data

The lifeblood of AI/ML is data, and health care data is often most accurately described as “messy.” Overall, 16% of provider organizations say low-quality and/or poorly governed data is one of the most significant barriers to adopting AI or predictive modeling. That’s especially true at large organizations (defined as those that actively care for more than 1.1 million individuals annually). Although large organizations tend to be more enthusiastic about AI (32% expect AI to be transformational and essential for their organization, compared to 19% of organizations overall), more than 21% of them say that low-quality and/or poorly governed data is a significant barrier.

As such, one major challenge of working with data is that it simply takes a long time to get good data. Cleaning and standardizing large data sets for use in AI training consistently requires a year or more of work. And that presumes that the necessary data is even accessible in the first place. Overall, 10% of provider organizations say that limited access to needed data is a significant barrier. Again, the larger the organization, the more acute the problem: 15% of large organizations report limited access to needed data as a significant challenge.
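To make the shape of that work concrete, here is a minimal sketch of the kind of standardization pass described above, assuming encounter records arrive in a pandas DataFrame; every field name and rule is hypothetical rather than drawn from the survey:

    import pandas as pd

    def standardize_encounters(df: pd.DataFrame) -> pd.DataFrame:
        """One illustrative cleaning pass over hypothetical encounter records."""
        df = df.copy()
        # Normalize inconsistently formatted dates; unparseable values
        # become NaT instead of failing silently downstream.
        df["encounter_date"] = pd.to_datetime(df["encounter_date"], errors="coerce")
        # Harmonize units (e.g., weight recorded in lb at some sites, kg at others).
        lb = df["weight_unit"].str.lower().eq("lb")
        df.loc[lb, "weight"] = df.loc[lb, "weight"] * 0.4536
        df["weight_unit"] = "kg"
        # Drop exact duplicates that arrive from multiple source systems.
        df = df.drop_duplicates(subset=["patient_id", "encounter_date"])
        # Flag missing values for human review rather than silently imputing.
        df["needs_review"] = df["encounter_date"].isna() | df["weight"].isna()
        return df

Each step is trivial in isolation; multiplied across hundreds of fields and dozens of source systems, steps like these add up to the year-plus of work organizations report.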

Working with data also means recognizing the biases embedded in it. Organizations should simply assume that any data used to train AI is biased. Structural bias runs deep in our institutions and our culture, and therefore in the data they create; it is unrealistic to assume that our health care organizations generate unbiased data. It is much more likely that health care data is biased against those who are already underserved—low-income patients, rural patients, nonwhite patients, and non-cisgender patients.
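One practical response is to audit a trained model’s performance by subgroup before deploying it. The sketch below assumes scikit-learn and a validation table with hypothetical column names; a large gap in sensitivity between groups is a signal that bias in the data has made it into the model:

    import pandas as pd
    from sklearn.metrics import recall_score

    def audit_by_group(results: pd.DataFrame, group_col: str) -> pd.Series:
        """Sensitivity (recall) per subgroup; large gaps suggest embedded bias."""
        return results.groupby(group_col).apply(
            lambda g: recall_score(g["true_label"], g["predicted_label"])
        )

    # Usage (hypothetical columns): audit_by_group(validation_results, "rurality")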

Decision-making with data

Simply put, existing care delivery systems are not designed to incorporate input from AI/ML. Forget the science fiction of replacing a clinical decision maker like a physician with an algorithm; merely alerting the decision maker that the model has a suggestion is often an enormous task that requires significant changes to and remodeling of workflows. In care delivery environments, those workflows are often complex, hierarchical, and dynamic in ways that assessments of an AI application’s potential fail to anticipate. Acknowledging the workflow implications of AI is a key first step to transformation; without it, any solutions to the challenges of working with data identified above will be for naught.
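As a thought experiment, the minimal sketch below shows what even the simplest “alert the decision maker” pattern implies: the model’s output becomes a request for human review that still has to land somewhere in an existing workflow. All names and thresholds here are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class ModelSuggestion:
        patient_id: str
        risk_score: float
        rationale: str  # surfaced to the clinician, never hidden

    REVIEW_THRESHOLD = 0.8  # illustrative; in practice, set through clinical governance

    def route_suggestion(suggestion: ModelSuggestion, review_queue: list) -> None:
        # The model never acts on its own; it can only request human review,
        # and that request must fit the clinic's existing processes.
        if suggestion.risk_score >= REVIEW_THRESHOLD:
            review_queue.append(suggestion)

Everything hard about this pattern lives outside the code: who monitors the queue, how the alert surfaces in the clinical system, and what happens when the clinician disagrees.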

Even when the technical challenges of incorporating AI/ML into care delivery workflows are addressed, challenges will remain for clinicians. As AI/ML contribute more to medical decision-making, clinicians may see those interventions as a threat to their identity, autonomy, and core beliefs about their roles as care providers. AI will not replace clinicians, but its use at scale may make them uncomfortable. These adaptive challenges may require clinicians to rethink their fundamental assumptions about how care is delivered. They may present as concerns about loss of identity (“I didn’t get into medicine to take suggestions from a computer”), loss of autonomy (“I don’t want to be told when to start treatment”), or quality of care (“how do I know this is even right?”).

Health care leaders must address these adaptive challenges around decision-making in parallel with technical challenges to win clinicians’ buy-in for AI/ML applications in care delivery. Unfortunately, leaders often misjudge the prevalence of adaptive challenges because they tend to be personal and potentially hard for clinicians to talk about. Leaders who are responsible for AI/ML applications often think about implementation issues in technical terms while ignoring the challenges clinicians are having with the personal implications of changing workflows and practice patterns.

Addressing adaptive challenges requires deliberately and overtly incorporating clinicians into prioritization and decision-making about applications of AI/ML. The strategic, legal/regulatory, and cost considerations of AI in care delivery should command attention from executives at the highest levels of any organization, but that does not mean decisions that affect care delivery can be made in an exclusively top-down manner. The urgency that surrounded overnight implementations of new technology and processes during the early phases of Covid-19 allowed many organizations to make dramatic changes quickly, largely from the top down. In many provider organizations, that kind of top-down decision-making continues to prevail, making it hard to gain buy-in from the frontline clinicians who are essential to successful implementation of any care delivery initiative.

Are we being too careful?

Realistic expectations represent a genuine maturity in providers’ understanding and use of AI/ML. The structural, systemic, and cultural changes required to take full advantage of AI/ML in health care are real and must be addressed proactively and incorporated into investment decisions.

That does not mean health care should abandon transformative aspirations. AI/ML can have an outsized, positive impact on health care. The challenges of total cost of care, administrative burden, clinician burnout, and patient experience have not responded to the non-AI solutions in health care’s toolkit. Health care may worry about the cost of AI solutions, but the cost of not trying new tools like AI/ML may be even greater.

The prevailing challenge of AI/ML in health care is one of balance and trade-offs: how, for example, to capture the potential of an application within the as-is reality of clinical workflows. Health care organizations need to balance AI use cases that provide incremental value today with those that will be transformational in the future.

For more on the future of artificial intelligence in health care, visit www.advisory.com.

© 2022 Advisory Board. All rights reserved.

This article does not constitute professional legal advice. Advisory Board does not endorse any companies, organizations, or their products as identified or mentioned herein. Advisory Board strongly recommends consulting legal counsel before implementing any practices contained in this article or making any decisions regarding suppliers and providers.