Explaining anomalies detected by autoencoders using Shapley Additive Explanations

Cited: 167
Authors
Antwarg, Liat [1 ]
Miller, Ronnie Mindlin [1 ]
Shapira, Bracha [1 ]
Rokach, Lior [1 ]
Affiliations
[1] Ben Gurion Univ Negev, Dept Informat & Software Syst Engn, Beer Sheva, Israel
Keywords
Explainable black-box models; XAI; Autoencoder; Shapley values; SHAP; Anomaly detection; NETWORK
DOI
10.1016/j.eswa.2021.115736
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning algorithms for anomaly detection, such as autoencoders, point out the outliers, saving experts the time-consuming task of examining normal cases in order to find anomalies. Most outlier detection algorithms output a score for each instance in the database. The top-k highest-scoring outliers are returned to the user for further inspection; however, manual validation of the results becomes challenging without justification or additional clues. An explanation of why an instance is anomalous enables the experts to focus their investigation on the most important anomalies and may increase their trust in the algorithm. Recently, a game theory-based framework known as SHapley Additive exPlanations (SHAP) was shown to be effective in explaining various supervised learning models. In this paper, we propose a method that uses Kernel SHAP to explain anomalies detected by an autoencoder, which is an unsupervised model. The proposed explanation method aims to provide a comprehensive explanation to the experts by focusing on the connection between the features with high reconstruction error and the features that are most important in terms of their effect on the reconstruction error. We propose a black-box explanation method because it can explain any autoencoder without knowledge of the model's exact architecture. The proposed explanation method extracts and visually depicts both the features that contribute the most to the anomaly and those that offset it. An expert evaluation using real-world data demonstrates the usefulness of the proposed method in helping domain experts better understand the anomalies. Our evaluation of the explanation method, in which a "perfect" autoencoder is used as the ground truth, shows that the proposed method explains anomalies correctly, using the exact features, and an evaluation on real data demonstrates that (1) our explanation model, which uses SHAP, is more robust than the Local Interpretable Model-agnostic Explanations (LIME) method, and (2) the explanations our method provides are more effective at reducing the anomaly score than those of other methods.
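The pipeline the abstract describes can be illustrated concretely. The sketch below is a minimal, hypothetical rendering of that idea, not the authors' published implementation: the toy autoencoder (a tanh squashing function), the synthetic data, and the sign-based split into contributing and offsetting features are all illustrative assumptions made here; only the documented shap.KernelExplainer API from the shap package is taken as given.

import numpy as np
import shap  # pip install shap

rng = np.random.default_rng(0)
background = rng.normal(size=(50, 5))       # sample of "normal" instances
x = np.array([[0.1, 0.2, 5.0, 0.0, -0.1]])  # instance flagged as anomalous

def autoencoder(X):
    # Toy stand-in for a trained autoencoder's predict(): tanh squashes the
    # extreme value in feature 2, producing a high reconstruction error there.
    return np.tanh(X)

# Step 1: rank features by squared reconstruction error for the anomaly.
errors = (autoencoder(x) - x) ** 2
top_features = np.argsort(errors[0])[::-1][:1]  # here: feature index 2

# Step 2: for each high-error feature j, treat the autoencoder's output for
# feature j as a scalar black-box model and explain it with Kernel SHAP.
for j in top_features:
    f_j = lambda X, j=j: autoencoder(X)[:, j]
    explainer = shap.KernelExplainer(f_j, background)
    shap_values = explainer.shap_values(x, nsamples=200)  # shape (1, 5)

    # Step 3 (simplified heuristic, assumed here): split features by whether
    # their SHAP value pushes the reconstruction away from the true value
    # (contributing to the error) or back toward it (offsetting it).
    direction = np.sign(autoencoder(x)[0, j] - x[0, j])
    contributing = np.flatnonzero(np.sign(shap_values[0]) == direction)
    offsetting = np.flatnonzero(np.sign(shap_values[0]) == -direction)
    print(f"feature {j}: contributing={contributing}, offsetting={offsetting}")

Treating each poorly reconstructed output feature as its own scalar black-box function is what lets Kernel SHAP, a tool designed for supervised models, be applied to an unsupervised autoencoder; the paper additionally visualizes the contributing and offsetting features for the domain expert.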
Pages: 14
Related Papers
50 records in total
  • [21] Detection of Monkeypox Cases Based on Symptoms Using XGBoost and Shapley Additive Explanations Methods
    Farzipour, Alireza
    Elmi, Roya
    Nasiri, Hamid
    DIAGNOSTICS, 2023, 13 (14)
  • [22] Exploring kinase family inhibitors and their moiety preferences using deep SHapley additive exPlanations
    Fan, You-Wei
    Liu, Wan-Hsin
    Chen, Yun-Ti
    Hsu, Yen-Chao
    Pathak, Nikhil
    Huang, Yu-Wei
    Yang, Jinn-Moon
    BMC BIOINFORMATICS, 2022, 23 (SUPPL 4)
  • [23] Electricity Consumption Forecasting: An Approach Using Cooperative Ensemble Learning with SHapley Additive exPlanations
    Alba, Eduardo Luiz
    Oliveira, Gilson Adamczuk
    Ribeiro, Matheus Henrique Dal Molin
    Rodrigues, Erick Oliveira
    FORECASTING, 2024, 6 (03): 839-863
  • [26] Landslide Modeling in a Tropical Mountain Basin Using Machine Learning Algorithms and Shapley Additive Explanations
    Vega, Johnny
    Sepulveda-Murillo, Fabio Humberto
    Parra, Melissa
    AIR SOIL AND WATER RESEARCH, 2023, 16
  • [27] Operating Key Factor Analysis of a Rotary Kiln Using a Predictive Model and Shapley Additive Explanations
    Mun, Seongil
    Yoo, Jehyeung
    ELECTRONICS, 2024, 13 (22)
  • [28] Assessment of the Impact of Meteorological Variables on Lake Water Temperature Using the SHapley Additive exPlanations Method
    Amnuaylojaroen, Teerachai
    Ptak, Mariusz
    Sojka, Mariusz
    WATER, 2024, 16 (22)
  • [29] An artificial neural network-pharmacokinetic model and its interpretation using Shapley additive explanations
    Ogami, Chika
    Tsuji, Yasuhiro
    Seki, Hiroto
    Kawano, Hideaki
    To, Hideto
    Matsumoto, Yoshiaki
    Hosono, Hiroyuki
    CPT-PHARMACOMETRICS & SYSTEMS PHARMACOLOGY, 2021, 10 (07): 760-768
  • [30] Combining categorical boosting and Shapley additive explanations for building an interpretable ensemble classifier for identifying mineralization-related geochemical anomalies
    Chen, Yongliang
    Chen, Bowen
    Shayilan, Alina
    ORE GEOLOGY REVIEWS, 2024, 173