Why Explainable AI Is Key to CX Success

Even as artificial intelligence (AI) is quickly and definitively transforming the customer experience (CX), the field of explainable AI (XAI) is transforming AI itself. The difference between CX initiatives that include XAI and those that don’t is striking. It’s become clear every company should use XAI in CX initiatives.

XAI is critical to AI’s proper functioning because many AI algorithms operate opaquely inside a so-called “AI black box,” which means even the data scientists who design the AI can’t always tell you exactly why an AI agent makes a particular decision.

You may be thinking: Hasn’t the bot been programmed to evaluate only certain criteria and to render specific, programmed judgments based on those criteria? Yes — and no. We’ll get into those details below. But an even better question might be: Is it useful to know why your bot makes particular decisions? Here the answer is a resounding yes! We’ll get into those details below as well.

Why You Need to Know Why AI Takes Action

Unless you know why the AI agent did what it did, you may not suspect its results are off — let alone have a clue as to the adjustments you need to make. Knowing why gives you actionable insights or intelligence for addressing sticky problems in new ways. Knowing why might even make you aware of issues you had not realized existed.

Finally, answering “Why?” is about trusting your AI. You and, more important, your customers must be confident that your bots produce accurate results — and that requires analysis and transparency.

Every algorithm is only as good as its data. To get the precise (and productive) results you want, that data must be carefully selected, continually reality-tested, and frequently updated.

It is this data that informs the AI. You see, instead of simply executing a series of straightforward calculations, the AI evaluates the data presented to it, extracts general patterns, and chooses which of these to apply to a specific instance.
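To make that concrete, here is a minimal sketch (with made-up examples, in Python with scikit-learn) of a model extracting general patterns from a handful of labeled chats and applying them to a message it has never seen:

```python
# Minimal sketch (made-up examples): the model generalizes patterns from
# labeled data, then applies them to a message it has never seen.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

examples = [
    "my package never arrived",
    "where is my order",
    "cancel my subscription",
    "stop billing me",
]
labels = ["shipping", "shipping", "cancellation", "cancellation"]

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(examples, labels)
print(model.predict(["my order is late"]))  # pattern generalized from the data
```

The prediction is only as good as the examples the model learned from, which is exactly why that data has to be curated and refreshed.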

Clearly, a lot is going on and there are more than a few ways that algorithms can go astray.

Related Article: Dealing With AI Biases, Part 1: Acknowledging the Bias 

When AI Results in Bad Medicine

The real-world case study below illustrates AI’s hidden decision-making and how it could lead to calamitous results.

An AI healthcare model (in this case, a neural network) was designed to predict high-risk pneumonia cases. It had learned — counterintuitively — that “patients with a history of asthma are at low risk for pneumonia.” Understandably, this decision path got the attention of the hospital’s data scientists.

When they looked into it, the data scientists saw this asthma/pneumonia pattern did, in fact, exist in the data. The AI was right — but misinformed.

Here’s what happened: The clinic routinely admitted patients with a history of asthma directly to its intensive care unit (ICU). These patients rarely developed pneumonia, because they were continually observed and cared for in the ICU. But the model only learned the association between the history of asthma and the low risk of pneumonia — and ignored the salutary effects of being a patient in the ICU.

Had this been an explainable AI, the error could have been ferreted out instantly.
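To see why, here is a hedged sketch (synthetic data and a plain logistic regression, not the study's actual model) of how an interpretable model puts the suspicious association in plain sight:

```python
# Minimal sketch (synthetic data, plain logistic regression, not the study's
# model): an interpretable model makes the suspicious asthma association visible.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
has_asthma = rng.integers(0, 2, n)      # 1 = history of asthma
age = rng.normal(65, 10, n)

# In the hospital's data, asthma patients went straight to the ICU, so their
# *observed* risk of a bad outcome was lower, not higher.
base_risk = 0.15 + 0.004 * (age - 65)
observed_risk = np.where(has_asthma == 1, base_risk * 0.5, base_risk)
bad_outcome = rng.random(n) < observed_risk

X = np.column_stack([has_asthma, age])
model = LogisticRegression().fit(X, bad_outcome)

# A negative weight on has_asthma is the red flag a reviewer can question.
print(dict(zip(["has_asthma", "age"], model.coef_[0].round(3))))
```

That counterintuitive negative weight is visible the moment the model is inspected, rather than buried in a black box.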

You can learn more about this instance in the original study, “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission.”

Related Article: What Is Explainable AI?

Explainable AI Exposed

The customer service AI examples below illustrate how the latest XAI capabilities enable your bot’s intent models to tell you which phrases or words trigger a particular intent prediction.

Here, the bot receives a chat text and returns a “predicted” label. The “true” label is ascertained by a live agent. This new XAI capability highlights how much influence various words have on the bot’s predicted label.

[Image: a chat message about a color scheme, shown with word-level highlighting; the predicted and true labels both read "promotional query"]
The predicted and true labels align, and the words that trigger the prediction make sense. There's nothing to fix; this is a good confirmation.
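For a rough sense of how this kind of word-level highlighting can be computed (invented utterances and a simple TF-IDF plus logistic-regression model here, not [24]7.ai's production tooling), note that for a linear classifier each word's contribution to the predicted intent is just its learned coefficient times its TF-IDF value:

```python
# Rough sketch (invented utterances, simple TF-IDF + logistic regression, not
# the vendor's tooling): a word's contribution to the predicted intent is its
# learned coefficient times its TF-IDF value.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "do you have this in a different color scheme",
    "is the spring promotion still running",
    "how much does the replacement remote cost",
    "what is the price of the basic plan",
]
train_intents = ["promotional query", "promotional query",
                 "equip price query", "pricing query"]

vec = TfidfVectorizer()
X = vec.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_intents)

chat = "how much does the remote cost"
row = vec.transform([chat]).toarray()[0]
pred = clf.predict(vec.transform([chat]))[0]
class_idx = list(clf.classes_).index(pred)

# Per-word contribution to the predicted intent's score, i.e. the "highlighting."
contribs = {word: clf.coef_[class_idx, col] * row[col]
            for word, col in vec.vocabulary_.items() if row[col] != 0}
print(pred)
print(sorted(contribs.items(), key=lambda kv: -kv[1]))
```

Sorting those contributions recovers the kind of highlighting shown above: the words pulling the prediction toward an intent float to the top.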

But the example below is where things get interesting!

[Image: a chat message asking about the cost of a remote, shown with word-level highlighting and labeled "equip price query"]

The highlighting tells us that while the AI gave "cost" a high weight, it downplayed "remote," which is likely why the model wrongly predicts this interaction as a generic pricing query.

Armed with this information, you could work to fix the model by adding more data for this particular intent. Knowing why the model was wrong gives you an additional lens through which to look at your model and take appropriate corrective actions.
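A sketch of that corrective loop (again with invented data): gather more utterances for the under-represented intent, retrain, and check whether the prediction now matches the true label.

```python
# Sketch of the corrective loop (invented data): add labeled examples for the
# under-represented intent, retrain, and re-check the prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = [
    ("what does the basic plan cost", "pricing query"),
    ("how much does shipping cost", "pricing query"),
    ("what is the monthly cost of premium", "pricing query"),
    ("how much does the replacement remote cost", "equip price query"),
]
# New utterances collected for the intent the model was getting wrong.
extra = [
    ("what does a new remote cost", "equip price query"),
    ("price of a spare remote control", "equip price query"),
    ("cost to replace my remote", "equip price query"),
]

def fit(rows):
    texts, intents = zip(*rows)
    return make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, intents)

chat = "how much would a remote cost"
print(fit(train).predict([chat])[0])          # before adding data
print(fit(train + extra).predict([chat])[0])  # after: "remote" should now carry weight
```

Rerunning the word-level view afterward can confirm that "remote" now carries the weight it should.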

When You Know Why Your Bot Took an Action

XAI keeps AI honest, ensuring it transparently churns out the accurate and useful results you and your customers expect.

Using XAI is becoming ever more essential the more deeply AI pervades and impacts critical areas of our lives. CX isn’t typically considered a matter of life and death. But customers are your lifeblood. XAI is all about keeping your customer relationships alive and thriving.

Michelle Gregory leads the [24]7.ai Data Science Group. In her career of 20+ years, she has focused on the understanding and visualization of large amounts of data, including real-time social data.

Abhishek Ghose, Director, [24]7.ai Data Science Group, has been working in the field of artificial intelligence/machine learning (AI/ML) since 2007. While he specializes in natural language processing (NLP) and explainable AI, his interests include a diverse set of AI/ML areas.

Emma Thuong Nguyen is a [24]7.ai Data Scientist focusing on machine learning (ML) and natural language processing. Previously, she developed ML models for molecular simulations at the University of California, San Diego.