March 26, 2020 – In the era of value-based healthcare, digital innovation, and big data, clinical decision support systems have become vital for organizations seeking to improve care delivery.
Clinical decision support (CDS) tools have the ability to analyze large volumes of data and suggest next steps for treatment, flagging potential problems and enhancing care team efficiency.
While these systems can add significant value to the healthcare industry, CDS technologies have also come with substantial challenges. Poorly implemented CDS tools that generate unnecessary alerts often result in alarm fatigue and clinician burnout, trends that can threaten patient safety and lead to worse outcomes.
“Clinical decision support tools have been around for a number of years, but many of them have been somewhat standalone solutions and not well-integrated into the clinical point of care devices that people are using,” Katherine Andriole, director of research strategy and operations at the MGH & BWH Center for Clinical Data Science (CCDS), told HealthITAnalytics.
Vendors, researchers, and developers have worked to overcome these issues, aiming to design solutions that are intuitive, informative, and efficient. At the core of many of these improved CDS tools are technologies that have long occupied the minds of healthcare tech enthusiasts: artificial intelligence and machine learning.
“A lot of these mathematical techniques have been around for a long time, but the reason that we’re seeing them come to the fore now is that things have gone digital,” said Andriole, who is also an associate professor of radiology at Harvard Medical School and Brigham and Women’s Hospital.
“We used to do radiology on film. Now we have imaging digitally. We used to have a patient chart that was paper in a folder, now charts are electronic. We have computing that is much faster than what we had, say, 20 years ago, when training machine learning models was very computationally intensive.”
Even with all these advancements, however, the industry still struggles with several foundational problems. Limited data access, a lack of provider education and training, and poor technology integration are all obstacles that many organizations have yet to overcome.
Applying machine learning and other analytics tools to CDS systems will require stakeholders to address these challenges, leading to more informed decision-making and better patient care.
Improving data quality to build quality algorithms
Machine learning and CDS tools are most effective when they are trained on data that is accurate, clean, and complete. After all, an algorithm’s output is only as good as its input, and in the high-stakes field of healthcare, that input must be highly accurate.
However, as most healthcare professionals know, medical information isn’t always stored in a standardized way. Data inaccuracies and missing information are all too common, meaning organizations have a lot of work to do before they can even start to develop CDS algorithms.
“You hear a lot about data quality. As the saying goes, garbage in, garbage out,” said Mark Sendak, MD, population health and data science lead at the Duke Institute for Health Innovation.
“Electronic health records and the data within them are not necessarily designed for downstream use in algorithms. The user interfaces and databases are designed with other purposes in mind, so there’s a lot we have to do to curate and transform data from its raw format into something that we can use in machine learning algorithms.”
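The curation Sendak describes can be illustrated with a toy sketch. Everything here is hypothetical, including the field names and values; it simply shows the kind of transformation needed to get from raw transactional records to a clean feature table:

```python
from datetime import datetime

# Toy raw EHR rows as they might be exported from a transactional
# database. Field names and formats are invented for illustration,
# not taken from any real system.
raw_records = [
    {"mrn": "001", "dob": "1958-03-02", "admit": "2020-01-05", "creatinine": "1.4"},
    {"mrn": "002", "dob": "1970-11-20", "admit": "2020-01-06", "creatinine": ""},    # missing lab
    {"mrn": "001", "dob": "1958-03-02", "admit": "2020-01-05", "creatinine": "1.4"}, # duplicate row
]

def curate(records):
    """De-duplicate, parse types, and make missingness explicit so the
    output is a feature table rather than raw transactions."""
    seen, cleaned = set(), []
    for r in records:
        key = (r["mrn"], r["admit"])
        if key in seen:
            continue  # drop exact duplicate encounters
        seen.add(key)
        admit = datetime.fromisoformat(r["admit"])
        dob = datetime.fromisoformat(r["dob"])
        cleaned.append({
            "mrn": r["mrn"],
            "age_at_admit": (admit - dob).days // 365,
            # None (rather than a silent zero) marks a missing value
            "creatinine": float(r["creatinine"]) if r["creatinine"] else None,
        })
    return cleaned

features = curate(raw_records)
```

Real pipelines are far more involved, but the pattern is the same: the raw data was designed for record-keeping, and every modeling project starts by reshaping it.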
Sendak and his team recently developed a machine learning model to predict adult patients’ risk of in-hospital mortality. Before building the tool, the group spent time gathering data and identifying which care settings and hospitals had better or worse mortality rates. While collecting information, researchers discovered that they were missing some crucial data points.
“People often say that mortality is a ‘hard outcome,’ which is something that you can measure and see clearly. What was a little bit surprising was that we don’t actually have complete death data, especially for patients who are discharged from the hospital, and this is true of many institutions,” Sendak noted.
“If someone is deceased or becomes deceased within a healthcare facility that we operate, we tend to have very accurate, comprehensive mortality data. But if someone is discharged and that person dies – whether it’s one week or 10 years after being discharged – and they don’t die in a healthcare facility, there are a lot of gaps in mortality data.”
These data gaps are a major barrier in the machine learning development process, Andriole stated.
“One of the biggest challenges in training algorithms for machine learning is gaining access to large amounts of data,” she said. “Because there can be security and privacy issues with patient information, not everyone has a great supply of data they can use to train these models.”
In Sendak’s case, he and his team were able to collaborate with state agencies to fill in the mortality data gaps, with strong results.
“We worked with our state health department to get data through the vital statistics office, which you can do as a research institution for different uses, and we were able to get state-level data,” he said.
“The results showed that the model performed on par with state-of-the-art methods. Our goal is that maybe this model will be able to identify patients who have a gap in care, and then we would recommend that these patients have a goals of care conversation during the admission.”
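Conceptually, what Sendak describes is a record-linkage step: in-hospital deaths come from the health system’s own records, and the state vital-statistics extract fills in deaths that occurred elsewhere. A minimal sketch, with entirely made-up patient IDs and fields:

```python
# Hypothetical hospital discharge records. In real data the death
# date would be its own field; here we simplify heavily.
hospital = {
    "p1": {"discharged": "2019-05-01", "died_in_hospital": False},
    "p2": {"discharged": "2019-06-10", "died_in_hospital": True},
    "p3": {"discharged": "2019-07-22", "died_in_hospital": False},
}

# Deaths recorded outside the hospital, from a (fictional) state
# vital-statistics extract keyed by the same patient ID.
state_deaths = {"p1": "2019-09-14"}

def death_date(pid):
    """Prefer the hospital's own record; fall back to the state registry."""
    rec = hospital[pid]
    if rec["died_in_hospital"]:
        return rec["discharged"]  # simplification for this toy example
    return state_deaths.get(pid)  # None if no death recorded anywhere

# Outcome labels for model training: did we observe a death at all?
has_death_label = {pid: death_date(pid) is not None for pid in hospital}
```

Without the state data, patient p1 would look like a survivor, which is exactly the labeling gap Sendak warns about.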
For other organizations, freely accessible datasets may be a viable resource for developing comprehensive CDS tools. Ronald Summers, MD, PhD, senior investigator of the Imaging Biomarkers and Computer-Aided Diagnostics Laboratory at the NIH Clinical Center, recently conducted a study in which his team aimed to extract information from CT scans that providers could use to gain further insights into patient health.
Researchers used publicly available data to train a deep learning model and found that the model was able to accurately identify and analyze certain biomarkers on CT scans, providing clinicians with more actionable decision-making information.
“We had to use manual assessment for the validation of each of the biomarkers, so that meant somebody had to sit down and either trace the edges of livers on CT scans or trace muscles, which is time-consuming and tedious,” Summers explained.
“To speed up this process, we used anonymized public data sets of traced organs, and we taught a deep learning algorithm how to find our particular biomarkers of interest on the CT scans.”
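Once an organ has been traced, either by hand or by a model, turning the trace into a biomarker is straightforward. The sketch below is illustrative only (a toy 2D mask with an invented pixel size; real pipelines work on 3D CT volumes), but it shows the measurement step Summers describes:

```python
# Toy 2D segmentation mask standing in for a traced liver on one CT
# slice: 1 = organ pixel, 0 = background. In practice such masks come
# from manual tracing or a trained deep learning model.
mask = [[0] * 10 for _ in range(10)]
for r in range(2, 7):      # a 5-row by 6-column block of organ pixels
    for c in range(3, 9):
        mask[r][c] = 1

# Hypothetical scan geometry: each pixel covers 0.8 mm x 0.8 mm.
pixel_area_mm2 = 0.8 * 0.8

def organ_area_mm2(mask, pixel_area):
    """Cross-sectional area biomarker: count organ pixels and scale by
    the physical area each pixel represents."""
    return sum(sum(row) for row in mask) * pixel_area

area = organ_area_mm2(mask, pixel_area_mm2)
```

The hard part, and the reason deep learning helps, is producing the mask; the biomarker itself is often just arithmetic on top of it.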
As more health systems seek to leverage AI and other analytics technologies to improve their CDS capabilities, public datasets like these will help accelerate the process of algorithm development.
“Academic institutions are talking about whether we can partner and create datasets that people can use to train their models. Because we know that contributing more labeled and preprocessed data will help move the field forward,” Andriole said.
Leveraging AI to develop workflow-friendly tools
Much of the hype surrounding machine learning in CDS stems from expectations of advanced, hyper-intelligent tools that can flawlessly detect tumors, lesions, or any other signs of illness.
Numerous studies have demonstrated the ability of AI and other analytics tools to predict kidney disease, identify breast cancer, and accurately forecast leukemia remission rates.
Although it’s easy to get swept up in the excitement about the potential of machine learning in healthcare, organizations should take a more pragmatic stance, Summers said.
“Researchers will continue to identify areas where these tools could be clinically beneficial, but the provider community needs to think about how the information developed by these AI systems can be put into practice in a way that improves care,” he stated.
In reality, most organizations are aiming to use machine learning for more mundane CDS tasks – at least for right now.
“A lot of people are focused on using AI for diagnostic clinical decision support, where the model would provide additional information to clinicians to help them make their decision,” Andriole said.
“But it’s the workflow and administrative kinds of models – the ones that help with things like patient scheduling or predicting patient no-shows – that are already making an impact. At the center, we focus on a number of things that are not necessarily difficult diagnostic problems, but they are things that might improve the workflow in some way.”
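The no-show prediction Andriole mentions can be caricatured very simply. The sketch below uses fabricated appointment data and a single invented feature (how far in advance the visit was booked); production models use many features and proper classifiers, but the idea is the same:

```python
from collections import defaultdict

# Fabricated appointment history: (lead_time_bucket, showed_up).
# The premise — longer booking lead times correlating with more
# no-shows — is a common finding, but these numbers are made up.
history = [
    ("0-7", True), ("0-7", True), ("0-7", True), ("0-7", False),
    ("8-30", True), ("8-30", False), ("8-30", False), ("8-30", True),
    ("31+", False), ("31+", False), ("31+", True), ("31+", False),
]

def no_show_rates(history):
    """Estimate P(no-show) per lead-time bucket from past visits."""
    counts = defaultdict(lambda: [0, 0])  # bucket -> [no_shows, total]
    for bucket, showed in history:
        counts[bucket][0] += 0 if showed else 1
        counts[bucket][1] += 1
    return {b: miss / total for b, (miss, total) in counts.items()}

rates = no_show_rates(history)

# A scheduler might send extra reminders, or offer the slot to a
# waitlist, when a booking falls in a high-risk bucket.
high_risk = {b for b, r in rates.items() if r >= 0.5}
```

Models like this are unglamorous, but acting on them (reminders, overbooking, waitlists) produces the workflow gains Andriole describes.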
For example, researchers at CCDS have developed a machine learning algorithm that can detect patient motion during an MRI scan. If a patient moves too much during a scan, the image may be unusable, requiring the patient to return to their provider for another one.
“Our imaging artifact detection tool allows us to fix the problem while the patient is still with us and still in the scanner. We can just go ahead and re-scan them rather than go through the process and cost of having them come back in because their imaging wasn’t adequate,” Andriole explained.
“We’re going to see some of these decision support or value-added tools put into the scanners, as well as some of the tools that we use at the point of care and in radiology.”
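CCDS has not published its detector’s internals here, but the idea of automatically flagging a degraded image can be caricatured with a crude sharpness check. This is purely illustrative: the metric, images, and threshold below are all invented, and real artifact detectors are trained deep learning models rather than fixed rules.

```python
def edge_energy(image):
    """Mean absolute horizontal gradient: blurred or washed-out images
    tend to have weaker edges than crisp ones."""
    total, n = 0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            n += 1
    return total / n

# Toy 4x4 "images": a crisp checkerboard-like frame vs. a flat,
# washed-out frame standing in for a motion-degraded scan.
crisp = [[0, 255, 0, 255]] * 4
blurry = [[120, 130, 125, 128]] * 4

SHARPNESS_THRESHOLD = 50  # arbitrary cutoff chosen for this toy example
needs_rescan = edge_energy(blurry) < SHARPNESS_THRESHOLD
```

The operational point is the workflow one Andriole makes: a flag raised while the patient is still on the table is worth far more than the same flag raised after they have gone home.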
For Sendak, tools that will optimize providers’ day-to-day job functions are top of mind. Understanding how information moves through the system is critical for improving care decisions, he emphasized.
“In our own system, it’s always about workflow. Who is the right person to share this information with, and when? At what time, and what are they going to do with it?” Sendak said.
“We try to think through the associated actions and decisions that people need to make. We ask, ‘Okay, are all patients identified by the model and reviewed by a clinician confirmed to have a goals of care conversation?’”
Including humans in the CDS design and implementation process is also essential for success, he noted. Organizations that rely only on advanced solutions to resolve major CDS pain points probably won’t see the best results.
“There’s lots of unhelpful, annoying clinical decision support. Machine learning itself will not solve the problems that clinical decision support already has, but it can make certain parts of clinical decision support more effective,” said Sendak.
“In almost all of our projects, we have a human in the loop. These are clinical decision support systems. They’re not driving clinical decisions, and models are wrong sometimes.”
Bigger teams, broader skillsets, better decisions
When building an algorithm that will help support clinical care decisions, it’s necessary to include individuals from all sectors of the healthcare industry. These tools will impact nearly everyone involved in the care delivery process, from providers and staff to patients themselves. Having a team that trusts these models will increase the chance that these algorithms will improve patient care.
“If an AI technique works well, it doesn’t necessarily mean that it will move from the bench to the bedside. There are a lot of factors that affect whether these techniques become available for clinical use. There are different people with different viewpoints and interests, and the process of making these tools available often requires skills outside those of the technology developers,” Summers said.
Collaborating on the best ways to integrate CDS tools with care practices will ensure clinicians can do their jobs successfully.
“Developing machine learning for CDS is a team sport,” said Andriole.
“You need machine learning experts. You need clinicians. You need to understand the clinical use case. Oftentimes when a model fails, a clinician can look at it and understand why. We really feel that having clinicians work alongside data scientists is one way we’re going to see advancement in this field.”
Going forward, healthcare organizations also may need to expand their pool of employees to include tech experts, Sendak added.
“You don’t typically think of health systems hiring teams of data scientists and data engineers. But the institutions that are leading the way in AI do have those jobs and those functions. It’s not just a technology investment, it’s an investment in people, skills, and capabilities,” he said.
Education and training will also play a key role, Andriole said.
“When new tools come along, we have to educate people on how to use them and how to assess the outputs. We don’t want clinicians to just blindly accept recommendations, but to analyze them and say, ‘Yeah, okay, this is what this means. I understand enough about how this works,’” she said.
“It’s not enough knowledge to build a model on their own, but they’ll understand enough to be able to use the tool.”
In the past, AI adoption in healthcare has been met with some degree of resistance by providers, partly due to valid concerns over the ethical implications of using these tools to deliver care.
However, recent research suggests that the tides may be changing. A global survey from Philips showed that 79 percent of healthcare professionals under 40 are confident that digital health technologies can achieve better patient outcomes, while 74 percent believe these tools will improve the patient experience.
As machine learning and clinical decision support continue to evolve, the next generation of providers will likely be well-equipped to understand and apply these tools in regular care delivery.
“People are very interested in learning about how they can use these methods to solve clinical problems,” Andriole said. “And it’s not just computer scientists and data scientists who are interested, but also a lot of our clinical trainees.”
In the not-so-distant future, machine learning and AI-fueled CDS tools just may become the healthcare industry’s standard.
“We may be able to start automating healthcare in the same ways that other industries have been automated,” Andriole concluded.