Path-Weights and Layer-Wise Relevance Propagation for Explainability of ANNs with fMRI Data

Cited by: 1
Authors
Marques dos Santos, Jose Diogo [1 ,2 ]
Marques dos Santos, Jose Paulo [3 ,4 ,5 ]
Affiliations
[1] Univ Porto, Fac Engn, R Dr Roberto Frias, P-4200465 Porto, Portugal
[2] Univ Porto, Abel Salazar Biomed Sci Inst, R Jorge Viterbo Ferreira, P-4050313 Porto, Portugal
[3] Univ Maia, Ave Carlos de Oliveira Campos, P-4475690 Maia, Portugal
[4] Univ Porto, LIACC Artificial Intelligence & Comp Sci Lab, R Dr Roberto Frias, P-4200465 Porto, Portugal
[5] Univ Porto, Fac Med, Unit Expt Biol, Alameda Prof Hernani Monteiro, P-4200319 Porto, Portugal
Keywords
Artificial neural networks (ANN); Explainable artificial intelligence (XAI); Layer-wise relevance propagation (LRP); Functional magnetic resonance imaging (fMRI); CEREBRAL-CORTEX; ORGANIZATION; NETWORK;
DOI
10.1007/978-3-031-53966-4_32
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The application of artificial neural networks (ANNs) to functional magnetic resonance imaging (fMRI) data has recently gained renewed attention for signal analysis, modeling of the underlying processes, and knowledge extraction. Although adequately trained ANNs are characterized by high predictive performance, the resulting models tend to be inscrutable due to their complex architectures. Explainable artificial intelligence (XAI) therefore seeks methods that help delve into ANNs' structures and reveal which inputs contribute most to correct predictions and how the networks carry out their computations up to the final decision. Several methods have been proposed to explain black-box ANNs' decisions, with layer-wise relevance propagation (LRP) being the current state of the art. This study investigates the consistency between LRP-based and path-weight-based analyses and how pruning and retraining the network affect each method in the context of fMRI data analysis. The procedure is tested with fMRI data obtained in a motor paradigm. Both methods were applied to a fully connected ANN and to its pruned and retrained versions. The results show that both methods agree on the most relevant inputs for each stimulus. The pruning process did not lead to major disagreements. Retraining affected both methods similarly, amplifying the changes initially observed after pruning. Notably, the inputs retained in the final ANN accord with the established neuroscientific literature on motor action in the brain, validating both the procedure and the explanation methods. Therefore, both methods can yield valuable insights for understanding the original fMRI data and extracting knowledge.
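As a point of reference for the two explanation approaches named in the abstract, the sketch below illustrates the standard epsilon-LRP backward redistribution rule on a toy fully connected ReLU network, together with a simple product-of-weights path score for comparison. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, variable names, and the exact product-of-weights definition of a "path weight" are assumptions made here for illustration only.

# Minimal illustrative sketch (not the authors' code): epsilon-LRP and a simple
# product-of-weights path score for a toy fully connected ReLU network.
# Layer sizes, variable names, and the path-weight definition are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 8 inputs (e.g., fMRI-derived features) -> 4 hidden units -> 2 outputs (stimuli)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 2))
x = rng.normal(size=8)          # one input vector

# Forward pass with a ReLU hidden layer
a1 = np.maximum(0.0, x @ W1)
out = a1 @ W2

def lrp_epsilon(a_prev, W, R_next, eps=1e-6):
    """Epsilon-LRP rule: R_i = sum_j a_i * w_ij / (z_j + eps * sign(z_j)) * R_j,
    redistributing the relevance R_next of a layer's outputs to its inputs."""
    z = a_prev @ W                                         # pre-activations of the next layer
    s = R_next / (z + eps * np.where(z >= 0, 1.0, -1.0))   # stabilized relevance per unit
    return a_prev * (W @ s)                                 # redistribute to the previous layer

# Relevance starts at the winning output and is propagated back to the inputs
c = int(np.argmax(out))
R_out = np.zeros_like(out)
R_out[c] = out[c]
R_hidden = lrp_epsilon(a1, W2, R_out)
R_input = lrp_epsilon(x, W1, R_hidden)

# Simple path-weight score for the winning class: for each input i, sum over
# hidden units h of the product of weights along the path i -> h -> c.
path_weights = (W1 * W2[:, c]).sum(axis=1)

print("LRP input relevances:", np.round(R_input, 3))
print("Path-weight scores:  ", np.round(path_weights, 3))

The epsilon term stabilizes the division when pre-activations are close to zero; relevance is conserved layer by layer up to that stabilizer, which is what makes LRP attributions comparable across inputs.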
Pages: 433-448
Number of pages: 16