Explaining anomalies detected by autoencoders using Shapley Additive Explanations

Cited by: 167
Authors
Antwarg, Liat [1 ]
Miller, Ronnie Mindlin [1 ]
Shapira, Bracha [1 ]
Rokach, Lior [1 ]
Affiliations
[1] Ben Gurion Univ Negev, Dept Informat & Software Syst Engn, Beer Sheva, Israel
Keywords
Explainable black-box models; XAI; Autoencoder; Shapley values; SHAP; Anomaly detection
DOI
10.1016/j.eswa.2021.115736
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning algorithms for anomaly detection, such as autoencoders, point out the outliers, saving experts the time-consuming task of examining normal cases in order to find anomalies. Most outlier detection algorithms output a score for each instance in the database. The top-k highest-scoring outliers are returned to the user for further inspection; however, the manual validation of results becomes challenging without justification or additional clues. An explanation of why an instance is anomalous enables the experts to focus their investigation on the most important anomalies and may increase their trust in the algorithm. Recently, a game theory-based framework known as SHapley Additive exPlanations (SHAP) was shown to be effective in explaining various supervised learning models. In this paper, we propose a method that uses Kernel SHAP to explain anomalies detected by an autoencoder, which is an unsupervised model. The proposed explanation method aims to provide a comprehensive explanation to the experts by focusing on the connection between the features with high reconstruction error and the features that are most important in terms of their effect on the reconstruction error. We propose a black-box explanation method, because it has the advantage of being able to explain any autoencoder without being aware of the exact architecture of the autoencoder model. The proposed explanation method extracts and visually depicts both the features that contribute the most to the anomaly and those that offset it. An expert evaluation using real-world data demonstrates the usefulness of the proposed method in helping domain experts better understand the anomalies. Our evaluation of the explanation method, in which a "perfect" autoencoder is used as the ground truth, shows that the proposed method explains anomalies correctly, using the exact features, and evaluation on real data demonstrates that (1) our explanation model, which uses SHAP, is more robust than the Local Interpretable Model-agnostic Explanations (LIME) method, and (2) the explanations our method provides are more effective at reducing the anomaly score than other methods.
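To make the idea in the abstract concrete, the following is a minimal sketch, not the authors' released code, of how Kernel SHAP can be applied to an autoencoder treated as a black box: for each feature of an anomalous instance with high reconstruction error, the reconstruction of that single feature is explained as a function of the inputs. The names `autoencoder`, `X_background`, and `x_anom` are hypothetical placeholders for a trained model, a sample of normal background data, and the instance to explain.

```python
# A minimal sketch under the assumptions stated above; `autoencoder`,
# `X_background`, and `x_anom` are hypothetical names for illustration.
import numpy as np
import shap

def reconstruction_errors(X):
    """Per-feature squared reconstruction error for each row of X."""
    X = np.atleast_2d(X)
    return (X - autoencoder.predict(X)) ** 2

# 1. Rank the anomaly's features by reconstruction error and keep the worst.
errors = reconstruction_errors(x_anom)[0]
top_features = np.argsort(errors)[::-1][:5]  # e.g., explain the 5 worst features

# 2. For each high-error feature j, explain the autoencoder's reconstruction
#    of that feature with Kernel SHAP, treating the model as a black box.
explanations = {}
for j in top_features:
    f_j = lambda X, j=j: autoencoder.predict(X)[:, j]  # reconstruction of feature j
    explainer = shap.KernelExplainer(f_j, X_background)
    explanations[j] = explainer.shap_values(np.atleast_2d(x_anom))[0]

# Features whose SHAP values push the reconstructed value away from the true
# input contribute to the anomaly; those pulling it closer offset it.
```

Explaining the reconstruction of each high-error feature separately mirrors the per-feature framing described in the abstract; the paper's specific ranking and visualization details are not reproduced in this sketch.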
Pages: 14
Related Papers
(50 in total)
  • [41] Shapley-Additive-Explanations-Based Factor Analysis for Dengue Severity Prediction using Machine Learning
    Chowdhury, Shihab Uddin
    Sayeed, Sanjana
    Rashid, Iktisad
    Alam, Md Golam Rabiul
    Masum, Abdul Kadar Muhammad
    Dewan, M. Ali Akber
    JOURNAL OF IMAGING, 2022, 8 (09)
  • [42] Elucidating microbubble structure behavior with a Shapley Additive Explanations neural network algorithm
    Zhuo, Qingxia
    Zhang, Linfei
    Wang, Lei
    Liu, Qinkai
    Zhang, Sen
    Wang, Guanjun
    Xue, Chenyang
    OPTICAL FIBER TECHNOLOGY, 2024, 88
  • [43] Shapley Additive Explanations for Text Classification and Sentiment Analysis of Internet Movie Database
    Dewi, Christine
    Tsai, Bing-Jun
    Chen, Rung-Ching
RECENT CHALLENGES IN INTELLIGENT INFORMATION AND DATABASE SYSTEMS, ACIIDS 2022, 2022, 1716: 69-80
  • [44] Leveraging Shapley Additive Explanations for Feature Selection in Ensemble Models for Diabetes Prediction
    Mohanty, Prasant Kumar
    Francis, Sharmila Anand John
    Barik, Rabindra Kumar
    Roy, Diptendu Sinha
    Saikia, Manob Jyoti
BIOENGINEERING-BASEL, 2024, 11 (12)
  • [45] Gradient boosting and Shapley additive explanations for fraud detection in electricity distribution grids
    Santos, Ricardo N.
    Yamouni, Sami
    Albiero, Beatriz
    Vicente, Renato
    Silva, Juliano A.
    Souza, Tales F. B.
    Freitas Souza, Mario C. M.
    Lei, Zhili
    INTERNATIONAL TRANSACTIONS ON ELECTRICAL ENERGY SYSTEMS, 2021, 31 (09)
  • [46] Improved prediction of soil shear strength using machine learning algorithms: interpretability analysis using SHapley Additive exPlanations
    Ahmad, Mahmood
    Al Zubi, Mohammad
    Almujibah, Hamad
    Sabri, Mohanad Muayad Sabri
    Mustafvi, Jawad Bashir
    Haq, Shay
    Ouahbi, Tariq
    Alzlfawi, Abdullah
    FRONTIERS IN EARTH SCIENCE, 2025, 13
  • [47] Neural network models and shapley additive explanations for a beam-ring structure
    Sun, Ying
    Zhang, Luying
    Yao, Minghui
    Zhang, Junhua
    CHAOS SOLITONS & FRACTALS, 2024, 185
  • [48] Recognizing and explaining driving stress using a Shapley additive explanation model by fusing EEG and behavior signals
    Yang, Liu
    Zhou, Ruoling
    Li, Guofa
    Yang, Ying
    Zhao, Qianxi
    ACCIDENT ANALYSIS AND PREVENTION, 2025, 209
  • [49] An explainable predictive model for suicide attempt risk using an ensemble learning and Shapley Additive Explanations (SHAP) approach
    Nordin, Noratikah
    Zainol, Zurinahni
    Noor, Mohd Halim Mohd
    Chan, Lai Fong
    ASIAN JOURNAL OF PSYCHIATRY, 2023, 79
  • [50] Diabetes prediction using Shapley additive explanations and DSaaS over machine learning classifiers: a novel healthcare paradigm
    Guleria, Pratiyush
    Srinivasu, Parvathaneni Naga
    Hassaballah, M.
MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (14): 40677-40712