
Mastering artificial intelligence and machine learning

Just a few decades ago, artificial intelligence (AI) was the stuff of science fiction, but it has since become part of our daily lives. In manufacturing, it identifies anomalies in the production process. For banks, it makes decisions about loans. And on Netflix, it finds just the right film for every user. All of this is made possible by highly complex algorithms that work unseen in the background. The more challenging the problem, the more complex the AI model – and the more inscrutable it becomes.

However, users want to be able to understand how a decision was made, particularly in critical applications: Why was the workpiece rejected? What caused the damage to my machine? Only by understanding the reasons behind decisions can improvements be made – and this increasingly applies to safety, too. In addition, the EU General Data Protection Regulation stipulates that automated decisions must be transparent.

Software comparison for xAI

To solve this problem, an entirely new field of research has emerged: “Explainable Artificial Intelligence”, or xAI for short. Numerous software tools are now available that explain complex AI models. For example, in an image they mark the pixels that led to a part being rejected. Experts at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart have now compared and evaluated nine common explanation techniques – including LIME, SHAP and Layer-Wise Relevance Propagation – using exemplary applications. The comparison was based on three main criteria:

  • Stability: The method should always generate the same explanation for the same problem. For example, an explanation should not point to sensor A one time and sensor B the next when the same anomaly arises in a production machine, as this would undermine trust in the algorithm and make it difficult to decide on a course of action.
  • Consistency: Similarly, only slightly different input data should receive similar explanations.
  • Fidelity: Above all, explanations must accurately reflect how the AI model actually behaves. For example, the explanation for the rejection of a bank loan should not be that the customer is too old when the real reason was that their income was too low.
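
To give a feel for how such attribution methods work in principle, the following is a minimal, hypothetical sketch of a LIME-style local surrogate explanation together with a simple stability check. The random forest, the synthetic tabular data and the helper function explain_locally are illustrative placeholders, not models or code from the Fraunhofer IPA study.

```python
# Minimal LIME-style local surrogate sketch (illustrative only).
# The "black box" model and data are placeholders, not the models
# evaluated in the Fraunhofer IPA study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy "black box": a classifier trained on synthetic tabular data.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(x, n_samples=1000, kernel_width=0.75):
    """Fit a weighted linear surrogate around instance x.

    Returns one coefficient per feature; a larger magnitude means the
    feature mattered more for the black box's prediction near x.
    """
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # 2. Query the black box for its predicted probabilities.
    p = black_box.predict_proba(Z)[:, 1]
    # 3. Weight perturbed samples by proximity to x.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2)
    # 4. Fit a simple, interpretable surrogate model.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

x0 = X[0]
expl_1 = explain_locally(x0)
expl_2 = explain_locally(x0)

# Stability check: the same instance should yield (nearly) the same
# explanation on repeated runs.
print("attribution run 1:", np.round(expl_1, 3))
print("attribution run 2:", np.round(expl_2, 3))
print("max difference:   ", np.abs(expl_1 - expl_2).max())
```

Because perturbation-based explainers rely on random sampling, the two runs will typically differ slightly; comparing repeated explanations in this way is one simple proxy for the stability criterion above.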

The use case is crucial

The study found that all of the explanation methods examined were viable. However, as Nina Schaaf, who is responsible for the study at Fraunhofer IPA, says: “There is no single perfect method.” For example, significant differences emerge in the time needed to generate explanations. The intended purpose also largely determines which software is best suited: Layer-Wise Relevance Propagation and Integrated Gradients, for example, work particularly well for image data. Summing up, Schaaf says: “The target group is also important when it comes to explanations: An AI developer will want, and should receive, an explanation phrased differently from the one given to a production manager, as each will draw different conclusions from it.”
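
As an illustration of the gradient-based attribution mentioned above, here is a minimal Integrated Gradients sketch for image data. The tiny convolutional network, the random input image and the function integrated_gradients are illustrative assumptions, not part of the study or the tools it evaluated.

```python
# Minimal Integrated Gradients sketch (illustrative only).
# The tiny CNN and random "image" are placeholders; a real use case
# would plug in the production model and an actual camera image.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy image classifier: one conv layer plus a linear head.
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * 28 * 28, 2),
)
model.eval()

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Approximate Integrated Gradients for one input image.

    Averages the gradients of the target logit along a straight path
    from a baseline (here: a black image) to the input, then scales
    by (input - baseline).
    """
    if baseline is None:
        baseline = torch.zeros_like(x)
    # Interpolation points along the path from baseline to input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)
    path.requires_grad_(True)
    # Gradient of the target class score w.r.t. each path point.
    scores = model(path)[:, target].sum()
    grads, = torch.autograd.grad(scores, path)
    avg_grad = grads.mean(dim=0)
    return (x - baseline).squeeze(0) * avg_grad

x = torch.rand(1, 1, 28, 28)            # placeholder "camera image"
target_class = model(x).argmax(dim=1).item()
attribution = integrated_gradients(model, x, target_class)

# Pixels with large attribution values are the ones the model relied
# on most for its decision.
print(attribution.shape)                # torch.Size([1, 28, 28])
```

In practice, libraries such as Captum provide ready-made implementations of Integrated Gradients and related methods; the sketch only shows that the attribution is built from gradients averaged along a path from a baseline to the input.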

– Edited from a Fraunhofer IPA press release by CFE Media.