WB-LRP: Layer-wise relevance propagation with weight-dependent baseline

Times Cited: 0
Authors
Li, Yanshan [1 ]
Liang, Huajie
Zheng, Lirong
Affiliations
[1] Shenzhen Univ, Guangdong Prov Key Lab Intelligent Informat Proc, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Layer-Wise Relevance Propagation (LRP); Interpretation; Network
DOI
10.1016/j.patcog.2024.110956
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
DeepLift is a special Layer-wise Relevance Propagation (LRP) algorithm that assigns importance to features by evaluating how small perturbations of the input features affect the output. However, we discover that only perturbations parallel to the plane defined by the weights and the input features affect the output. Motivated by this observation, we propose a new LRP algorithm that offers more accurate instance-level explanations for neural networks. First, we derive the baselines of various existing LRP algorithms using DeepLift theory and find that these baselines often disregard the model weights, the sample features, or both. Second, we propose the Weight-dependent Baseline Layer-wise Relevance Propagation (WB-LRP) algorithm to address this problem, deriving the norm-invariant rule (m-rule) from our discovery. We then extend the m-rule to the positive norm-invariant rule (m+-rule), which focuses specifically on the positive parts of the weights. Finally, we generalize the WB-LRP theory to accommodate various baseline settings, yielding a framework that subsumes nearly all existing LRP algorithms. Experimental results show that the relevance interpretation depends on both the model weights and the sample features. The proposed WB-LRP algorithm can be instantiated with different baselines to adjust the proportions of model weights and sample features in the visualization results, enabling separate visualization of the relevance of sample features and model weights to the target category.
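The abstract's central point, that relevance attributions depend on both the weights and the input features, can be illustrated with the standard LRP-epsilon rule for a single dense layer (a minimal sketch of generic LRP, not the paper's m-rule or m+-rule, whose exact forms are not given here; the function name `lrp_epsilon` and the toy values are illustrative assumptions):

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Standard LRP-epsilon rule for one dense layer (illustrative sketch).

    a:     input activations, shape (n_in,)
    W:     weight matrix, shape (n_in, n_out)
    b:     biases, shape (n_out,)
    R_out: relevance assigned to the layer's outputs, shape (n_out,)
    Returns relevance redistributed onto the inputs, shape (n_in,).
    """
    z = a @ W + b                        # forward pre-activations z_j
    s = R_out / (z + eps * np.sign(z))   # stabilized relevance ratio s_j
    return a * (W @ s)                   # R_i = a_i * sum_j w_ij * s_j

# Toy layer: 3 inputs -> 2 outputs, all output relevance on unit 0.
a = np.array([1.0, 2.0, 0.5])
W = np.array([[ 1.0, -1.0],
              [ 0.5,  0.0],
              [-2.0,  1.0]])
b = np.zeros(2)
R_out = np.array([1.0, 0.0])
R_in = lrp_epsilon(a, W, b, R_out)  # -> approx [1.0, 1.0, -1.0]
```

Note that `R_i = a_i * w_ij * s_j` is a product of a feature and a weight, so zeroing either one zeroes the attribution; with zero biases the rule also conserves total relevance (`R_in.sum() == R_out.sum()`). A DeepLift-style baseline enters by replacing `a` with the difference `a - a0` from a reference input, which is the degree of freedom the WB-LRP framework parameterizes.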
Pages: 11
Related Papers (50 total)
  • [41] Beyond saliency: Understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation
    Li, Heyi
    Tian, Yunke
    Mueller, Klaus
    Chen, Xin
    IMAGE AND VISION COMPUTING, 2019, 83-84 : 70 - 86
  • [42] An explainable brain tumor detection and classification model using deep learning and layer-wise relevance propagation
    Mandloi, Saurabh
    Zuber, Mohd
    Gupta, Rajeev Kumar
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (11) : 33753 - 33783
  • [44] Heatmap-based Explanation of YOLOv5 Object Detection with Layer-wise Relevance Propagation
    Karasmanoglou, Apostolos
    Antonakakis, Marios
    Zervakis, Michalis
2022 IEEE INTERNATIONAL CONFERENCE ON IMAGING SYSTEMS AND TECHNIQUES (IST 2022), 2022
  • [45] An explainable multiscale LSTM model with wavelet transform and layer-wise relevance propagation for daily streamflow forecasting
    Tao, Lizhi
    Cui, Zhichao
    He, Yufeng
    Yang, Dong
    SCIENCE OF THE TOTAL ENVIRONMENT, 2024, 929
  • [46] Exploring Fine-Grained Feature Analysis for Bird Species Classification using Layer-wise Relevance Propagation
    Arquilla, Kyle
    Gajera, Ishan Dilipbhai
    Darling, Melanie
    Bhati, Deepshikha
    Singh, Aditi
    Guercio, Angela
    2024 IEEE 5TH ANNUAL WORLD AI IOT CONGRESS, AIIOT 2024, 2024, : 0625 - 0631
  • [47] Improving deep neural network generalization and robustness to background bias via layer-wise relevance propagation optimization
    Bassi, Pedro R. A. S.
    Dertkigil, Sergio S. J.
    Cavalli, Andrea
    NATURE COMMUNICATIONS, 2024, 15 (01)
  • [48] Study of Coarse-to-Fine Class Activation Mapping Algorithms Based on Contrastive Layer-wise Relevance Propagation
    Sun, Hui
    Shi, Yulong
    Wang, Rui
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2023, 45 (04) : 1454 - 1463
  • [50] Selection of the Main Control Parameters for the Dst Index Prediction Model Based on a Layer-wise Relevance Propagation Method
    Li, Y. Y.
    Huang, S. Y.
    Xu, S. B.
    Yuan, Z. G.
    Jiang, K.
    Wei, Y. Y.
    Zhang, J.
    Xiong, Q. Y.
    Wang, Z.
    Lin, R. T.
    Yu, L.
ASTROPHYSICAL JOURNAL SUPPLEMENT SERIES, 2022, 260 (01)