Understanding Convolutional Neural Networks From Excitations

Cited by: 0
Authors
Ying, Zijian [1 ]
Li, Qianmu [1 ]
Lian, Zhichao [1 ]
Hou, Jun [2 ]
Lin, Tong [3 ]
Wang, Tao [3 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Cyber Sci & Engn, Nanjing 210094, Peoples R China
[2] Nanjing Vocat Univ Ind Technol, Dept Social Sci, Nanjing 210023, Peoples R China
[3] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Keywords
Local explanation; positive and negative excitations (PANEs); saliency map; explainable artificial intelligence (XAI)
DOI
10.1109/TNNLS.2024.3430978
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Saliency maps have proven to be a highly effective approach for explaining the decisions of convolutional neural networks (CNNs). However, existing methods rely predominantly on gradients, which limits their ability to explain complex models, and they do not fully exploit negative gradient information to improve interpretive accuracy. In this study, we present a novel concept, termed positive and negative excitation (PANE), which enables the direct extraction of positive and negative excitations for each layer, thereby allowing complete layer-by-layer information to be used without gradients. To organize these excitations into final saliency maps, we introduce a double-chain backpropagation procedure. A comprehensive experimental evaluation, covering both binary classification and multiclassification tasks, was conducted to gauge the effectiveness of the proposed method. The results show that our approach offers a significant improvement over state-of-the-art methods in terms of salient pixel removal, minor pixel removal, and guidance for generating inconspicuous adversarial perturbations. In addition, we verify the correlation between PANEs.
Pages: 13
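
The abstract describes extracting positive and negative excitations layer by layer without relying on gradients. As a rough illustration of the general idea of splitting a layer's contribution into positive and negative parts (not the paper's actual PANE definition or double-chain backpropagation procedure), the following PyTorch sketch decomposes a convolutional layer's bias-free pre-activation into a positive and a negative term; the layer sizes, function name, and tolerance are assumptions made for this example only.

    # Illustrative sketch only: a generic positive/negative split of a conv layer's
    # contributions. This is NOT the paper's PANE algorithm; all names and shapes
    # here are assumptions for demonstration.
    import torch
    import torch.nn.functional as F

    def split_excitations(x, conv):
        """Split a conv layer's bias-free pre-activation into positive and negative parts."""
        w_pos = conv.weight.clamp(min=0)   # positively contributing weights
        w_neg = conv.weight.clamp(max=0)   # negatively contributing weights
        x_pos = x.clamp(min=0)             # non-negative part of the input
        x_neg = x.clamp(max=0)             # non-positive part of the input
        # Positive excitation: like-signed weight/input pairs.
        pos = (F.conv2d(x_pos, w_pos, stride=conv.stride, padding=conv.padding)
               + F.conv2d(x_neg, w_neg, stride=conv.stride, padding=conv.padding))
        # Negative excitation: the remaining cross terms.
        neg = (F.conv2d(x_pos, w_neg, stride=conv.stride, padding=conv.padding)
               + F.conv2d(x_neg, w_pos, stride=conv.stride, padding=conv.padding))
        return pos, neg

    # Toy usage (shapes chosen arbitrarily): the two parts sum to the bias-free output.
    conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
    x = torch.randn(1, 3, 32, 32)
    pos, neg = split_excitations(x, conv)
    ref = F.conv2d(x, conv.weight, stride=conv.stride, padding=conv.padding)
    assert torch.allclose(pos + neg, ref, atol=1e-5)

By construction, the two terms sum back to the layer's bias-free output, which is the kind of per-layer decomposition that could be propagated from the output toward the input; the paper's precise rules for combining such excitations into saliency maps are given in the full text.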