Path-Weights and Layer-Wise Relevance Propagation for Explainability of ANNs with fMRI Data

Cited by: 1
Authors
Marques dos Santos, Jose Diogo [1 ,2 ]
Marques dos Santos, Jose Paulo [3 ,4 ,5 ]
Affiliations
[1] Univ Porto, Fac Engn, R Dr Roberto Frias, P-4200465 Porto, Portugal
[2] Univ Porto, Abel Salazar Biomed Sci Inst, R Jorge Viterbo Ferreira, P-4050313 Porto, Portugal
[3] Univ Maia, Ave Carlos de Oliveira Campos, P-4475690 Maia, Portugal
[4] Univ Porto, LIACC Artificial Intelligence & Comp Sci Lab, R Dr Roberto Frias, P-4200465 Porto, Portugal
[5] Univ Porto, Fac Med, Unit Expt Biol, Alameda Prof Hernani Monteiro, P-4200319 Porto, Portugal
Keywords
Artificial neural networks (ANN); Explainable artificial intelligence (XAI); Layer-wise relevance propagation (LRP); Functional magnetic resonance imaging (fMRI); CEREBRAL-CORTEX; ORGANIZATION; NETWORK;
DOI
10.1007/978-3-031-53966-4_32
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
The application of artificial neural networks (ANNs) to functional magnetic resonance imaging (fMRI) data has recently gained renewed attention for signal analysis, modeling the underlying processes, and knowledge extraction. Although adequately trained ANNs are characterized by high predictive performance, the underlying models tend to be inscrutable because of their complex architectures. Explainable artificial intelligence (XAI) seeks methods that help delve into ANNs' structures and reveal which inputs contribute most to correct predictions and how the networks unroll their calculations to reach a final decision. Several methods have been proposed to explain black-box ANN decisions, with layer-wise relevance propagation (LRP) being the current state of the art. This study investigates the consistency between LRP-based and path-weight-based analyses, and how pruning and retraining the network affect each method in the context of fMRI data analysis. The procedure is tested on fMRI data obtained in a motor paradigm. Both methods were applied to a fully connected ANN and to its pruned and retrained versions. The results show that both methods agree on the most relevant inputs for each stimulus. Pruning did not lead to major disagreements. Retraining affected both methods similarly, exacerbating the changes initially observed during pruning. Notably, the inputs retained in the final ANN accord with the established neuroscientific literature on motor action in the brain, validating both the procedure and the explanation methods. Both methods can therefore yield valuable insights for understanding the original fMRI data and extracting knowledge.
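The abstract contrasts path-weight analysis with layer-wise relevance propagation (LRP). As an illustration only, not the authors' implementation, the LRP ε-rule for a dense layer can be sketched as below; the tiny network, its random weights, and the `eps` value are all hypothetical, chosen just to show how relevance is redistributed from outputs back to inputs:

```python
import numpy as np

def lrp_epsilon(weights, biases, activations, relevance_out, eps=1e-9):
    """Propagate relevance one dense layer backward with the LRP epsilon-rule.

    weights: (n_in, n_out), biases: (n_out,)
    activations: (n_in,) inputs to this layer
    relevance_out: (n_out,) relevance assigned to the layer's outputs
    Returns relevance_in: (n_in,).
    """
    z = activations @ weights + biases                 # forward pre-activations
    s = relevance_out / np.where(z >= 0, z + eps, z - eps)  # stabilized ratio
    return activations * (weights @ s)                 # redistribute to inputs

# Hypothetical fully connected net: 4 inputs -> 3 hidden (ReLU) -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

x = rng.normal(size=4)
a1 = np.maximum(0.0, x @ W1 + b1)   # hidden activations
out = a1 @ W2 + b2                  # network outputs (logits)

# Seed relevance at the winning output unit, then propagate back to the inputs
R_out = np.where(out == out.max(), out, 0.0)
R_hidden = lrp_epsilon(W2, b2, a1, R_out)
R_input = lrp_epsilon(W1, b1, x, R_hidden)

# With zero biases, total relevance is (approximately) conserved per layer
print(R_out.sum(), R_hidden.sum(), R_input.sum())
```

The conservation property checked at the end is what makes the input relevances interpretable as each voxel's share of the network's decision, which is the quantity the abstract compares against path-weight scores.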
Pages: 433–448 (16 pages)