Much proselytizing has occurred regarding the value and future of artificial intelligence (AI) and machine learning in healthcare. The industry is burgeoning. As with blockchain technology, which continues to evolve in the healthcare marketplace, AI and machine learning are constructs that require a bit of near-term expectation management. While their efficacy and value will improve with time, they are not (at present) the magic bullet that will answer the myriad care delivery and cost questions surrounding healthcare in the United States. Owing to space constraints, this column is an admittedly simplistic contemplation of AI. As prologue to this article, I am not an AI programmer, don’t play in Python, and have never built a machine learning algorithm. That said, I do have 30 years of practical experience in the healthcare trenches and have dealt with information technology (IT) systems and applications in that time, such as culling quality data and outcomes from electronic medical record (EMR) systems and deploying rudimentary analytics. I also have a fairly extensive background in IT.
Preamble aside, last year, when blockchain was casually bandied about, I suggested that solid deployment of blockchain technology in healthcare would take some time due to significant disparity in the care delivery system and the multitude of inputs and variables. Use and deployment of blockchain are predicated on targeted problems with common, agreed-upon data sets. Generally, the same can be said of AI. Is that to say that AI, machine learning, and blockchain will not play a role in the future of healthcare? Certainly not. I believe they will play a significant role. However, short-term challenges will continue as robust IT offerings are unveiled. AI, machine learning, blockchain, and other cutting-edge technologies are needed to advance the delivery and coordination of care, squeeze costs and redundancy out of the “system,” and help ensure repeatable quality outcomes. But few technologies are perfect, and most require time to germinate as they grow in use and scalability.
For the sake of this article we should expound on our definitions. As with telehealth, where people often use telehealth and telemedicine interchangeably, many people toss AI and machine learning into the same bucket. I’d suggest that many components fall under the AI umbrella, including machine learning. With AI, machines mimic human cognitive functions. Under that umbrella, AI includes machine learning, natural language processing (NLP), and “reasoning.” With machine learning, machines are given no explicit instructions but extrapolate and determine patterns in large chunks of data. “Reasoning” is stored information combined with rules, utilized to make deductions. NLP is the processing, analyzing, understanding, and generating of natural human languages. Machines can be taught to learn and discern between items. For instance, coding can be deployed to identify different leaves (an admittedly absurd example). Each leaf has data element differentiators that help the computer “learn” what the types of leaves are. The computer can then, over time, distinguish an oak leaf from a maple leaf, for instance. But the computer knows none of this unless it is “told” what these items are and how they are defined. The inputs must be sound, and the algorithms must be written with background knowledge and understanding about the underlying issue at hand (e.g., the differences between an oak leaf and a maple leaf).
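To make the leaf example concrete, here is a minimal sketch of learning from labeled examples. The feature names, values, and the simple nearest-neighbor approach are all illustrative assumptions, not a real botanical model; the point is only that the machine is “told” what each training leaf is, and then generalizes from those human-supplied labels.

```python
import math

# Hypothetical training data: each leaf is described by two numeric
# features -- (number of lobes, edge serration score from 0 to 1).
# The labels ("oak", "maple") are supplied by a human; the machine
# "learns" only from these labeled examples.
training = [
    ((7, 0.2), "oak"),    # oaks here: many lobes, smooth edges
    ((9, 0.1), "oak"),
    ((5, 0.9), "maple"),  # maples here: fewer lobes, serrated edges
    ((5, 0.8), "maple"),
]

def classify(features):
    """1-nearest-neighbor: label a new leaf with the label of its
    closest training example (by Euclidean distance)."""
    return min(training, key=lambda ex: math.dist(features, ex[0]))[1]

print(classify((8, 0.15)))  # resembles the oak examples -> "oak"
print(classify((5, 0.85)))  # resembles the maple examples -> "maple"
```

Note what happens if the training labels or features are wrong: the classifier dutifully learns the error, which is exactly the “bad inputs lead to bad outputs” problem discussed below.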
And that can be the rub. Subject matter experts (SMEs) and data scientists must work hand in glove to delineate the problem to be solved, the data needed, and the nurturing of the algorithms to ensure they remain relevant. Bad “training” of the computer and bad data inputs lead to bad and/or inaccurate outputs.
Figure 1 below shows how these components live under the greater AI umbrella.
How does a bad construct present itself? As an apolitical consideration, we’ve recently seen how bad data inputs lead to bad outputs. A variety of recent COVID-19 projections by certain entities were grossly inaccurate, overestimating infection rates and deaths. While not AI, per se, certainly the algorithms, logic, and data inputs had flaws leading to calamitously inaccurate results. Again, bad or misunderstood inputs and bad algorithms can lead to bad outputs.
Lest you think me a naysayer, I’ll reemphasize that I believe AI will play an increasingly larger role in healthcare delivery; it’s a matter of time and necessity. The key is in the development, build, and parameters of the logic: data scientists and SMEs (e.g., clinicians and healthcare executives) must communicate clearly. If SMEs do not clearly delineate their needs and inputs, programmers will head in the wrong direction, building structural errors into algorithms that effectively keep the machine from “learning” the right responses and outputs. Thus, a quality output is predicated not only on sound algorithms (from the programmers) but also on the right inputs to empower the machine to “learn” and provide actionable insights and/or render decisions.
AI missteps are bad enough in businesses, but consider the life-and-death ramifications if you have deployed, say, a cardiology AI protocol that does not have all the right inputs and parameters built in. As I’ve noted before, and was discussed in a recent Forbes.com article (Blockchain Technology May [Eventually] Fix Healthcare: Just Don’t Hold Your Breath, March 2019), AI snafus exist. “In July 2018, StatNews reviewed internal IBM documents and found that IBM’s Watson was offering erroneous, sometimes dangerous cancer treatment advice.”[i] This statement is in no way meant to impugn IBM and Watson but to instead point out the downside of AI in healthcare. (In a blog post titled “Setting the Record Straight[ii],” IBM responded to some of this media coverage by saying that it is inaccurate to suggest Watson “has not made ‘enough’” progress on bringing the benefits of AI to healthcare.)
That said, bite-sized business use cases may prove more approachable in the near term. For instance, in healthcare, a well-defined project may focus on patient outmigration in an accountable care organization (ACO) that has downside financial risk. The defined/required output might be quantifying the ACO’s financial risk, delineating the clinician(s) who refer out as a matter of course, and identifying the clinic location(s) those referrals leave from and go to. This is a specific use case with a defined outcome/goal that is actionable.
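As a rough illustration of how well-defined that output can be, the outmigration analysis above amounts to simple aggregation over referral records. The field names, dollar figures, and clinician names below are entirely hypothetical; a real analysis would draw on the ACO’s claims feed and network directory.

```python
from collections import defaultdict

# Hypothetical referral records: (referring clinician, originating
# clinic, destination facility, estimated cost of the episode).
referrals = [
    ("Dr. Adams", "North Clinic", "Out-of-network Ortho",   12_000),
    ("Dr. Adams", "North Clinic", "Out-of-network Ortho",    9_500),
    ("Dr. Baker", "South Clinic", "In-network Cardiology",   4_000),
    ("Dr. Adams", "North Clinic", "Out-of-network Imaging",  1_800),
]

# Quantify leakage: dollars leaving the network, grouped by the
# referring clinician and by the origin -> destination path.
leakage_by_clinician = defaultdict(float)
leakage_by_path = defaultdict(float)
for clinician, origin, dest, cost in referrals:
    if dest.startswith("Out-of-network"):
        leakage_by_clinician[clinician] += cost
        leakage_by_path[(origin, dest)] += cost

print(dict(leakage_by_clinician))
print(dict(leakage_by_path))
```

The output directly answers the use case’s three questions: how much financial risk the ACO carries, which clinician(s) refer out as a matter of course, and which clinic locations the referrals leave from and go to.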
Arguably, in healthcare the “output” matters considerably more than in a widget manufacturing facility. In addition, according to an IDC survey, one in four companies sees an almost 50% failure rate in their AI initiatives.[iii]
AI May Drive Healthcare Success
AI will continue to grow in use and value in healthcare. Whether it’s in predictive analytics for disease states, cash flow on the revenue cycle side of the business, or value-based care initiatives, AI is here to stay. However, success factors for the growth of AI in healthcare may include, but not be limited to:
· a sound, defined business case (eat the elephant in small bites)
· clear communications of expected outputs between SMEs and data scientists
· sound, clean data
· model scalability
The future for AI in healthcare looks bright. Its application is simply a marathon, not a sprint.
[i] Stories of AI Failure and How to Avoid Similar AI Fails, Lexalytics
[ii] Watson Health: Setting the Record Straight, Watson Health Perspectives
[iii] Artificial Intelligence Global Adoption Trends & Strategies, IDC, July 2019