LRP2A: Layer-wise Relevance Propagation based Adversarial attacking for Graph Neural Networks

Cited by: 3
Authors
Liu, Li [1 ,2 ]
Du, Yong [1 ]
Wang, Ye [1 ]
Cheung, William K. [2 ]
Zhang, Youmin [1 ]
Liu, Qun [1 ]
Wang, Guoyin [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Computat Intelligence, Chongqing 400065, Peoples R China
[2] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial attacks; Graph Neural Networks; Layer-wise relevance propagation; Classification;
DOI
10.1016/j.knosys.2022.109830
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Graph Neural Networks (GNNs) are widely utilized for graph data mining owing to their powerful feature representation ability. Yet they are prone to adversarial attacks involving only slight perturbations of the input data, which limits their applicability in critical applications. Vulnerability analysis of GNNs is thus essential if more robust models are to be developed. To this end, a Layer-wise Relevance Propagation based Adversarial attacking (LRP2A) model is proposed. Specifically, to facilitate applying LRP to the "black-box" victim model, we train a surrogate model based on a sophisticated re-weighting network. The LRP algorithm is then leveraged to unravel the "contributions" among the nodes in the downstream classification task. Furthermore, the graph adversarial attacking algorithm is intentionally designed to be both interpretable and efficient. Experimental results demonstrate the effectiveness of the proposed attacking model on GNNs for node classification. Additionally, LRP2A makes the choice of adversarial attacking strategies on the GNN interpretable, which in turn yields deeper insights into the GNN's vulnerability. (c) 2022 Elsevier B.V. All rights reserved.
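The core mechanism the abstract describes — redistributing a model's output score back onto its inputs layer by layer, then using the resulting relevance scores to rank perturbation targets — can be sketched with the standard LRP-epsilon rule on a toy two-layer surrogate. This is a minimal illustration under assumed details, not the paper's implementation: the network shape, the `lrp_epsilon` helper, and the final target ranking are all hypothetical stand-ins.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """LRP-epsilon rule: redistribute the relevance R_out of a linear layer
    z = a @ W back onto its inputs a (hypothetical helper, not from the paper)."""
    z = a @ W                                      # pre-activations of the layer
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon-stabilized denominator
    s = R_out / denom
    return a * (W @ s)                             # relevance of each input

# Toy surrogate: scores = relu(x @ W1) @ W2 (stand-in for the trained surrogate model)
rng = np.random.default_rng(0)
x = rng.random(4)                                  # 4 input features (e.g., node features)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 2))
h = np.maximum(x @ W1, 0.0)
scores = h @ W2

# Seed relevance with the predicted class score, then propagate backwards
R2 = np.zeros_like(scores)
R2[scores.argmax()] = scores.max()
R1 = lrp_epsilon(h, W2, R2)                        # relevance of hidden units
R0 = lrp_epsilon(x, W1, R1)                        # relevance of each input

# Inputs with the largest relevance are natural candidates to perturb first
targets = np.argsort(-R0)
```

Because the epsilon rule approximately conserves relevance across layers (the sum of `R0` stays close to the seeded output score), the scores give a budget-aware ranking of which inputs most influenced the prediction, which is the kind of interpretable signal an LRP-guided attack can exploit.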
Pages: 12
Related Papers
(50 total; items [41]-[50] shown)
  • [41] Activation Distribution-based Layer-wise Quantization for Convolutional Neural Networks
    Ki, Subin
    Kim, Hyun
    2022 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION, AND COMMUNICATION (ICEIC), 2022,
  • [42] Unsupervised Layer-Wise Model Selection in Deep Neural Networks
    Ludovic, Arnold
    Helene, Paugam-Moisy
    Michele, Sebag
    ECAI 2010 - 19TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2010, 215 : 915 - 920
  • [43] Layer-Wise Relevance Propagation for Explaining Deep Neural Network Decisions in MRI-Based Alzheimer's Disease Classification
    Boehle, Moritz
    Eitel, Fabian
    Weygandt, Martin
    Ritter, Kerstin
    FRONTIERS IN AGING NEUROSCIENCE, 2019, 11
  • [44] Collaborative Layer-Wise Discriminative Learning in Deep Neural Networks
    Jin, Xiaojie
    Chen, Yunpeng
    Dong, Jian
    Feng, Jiashi
    Yan, Shuicheng
    COMPUTER VISION - ECCV 2016, PT VII, 2016, 9911 : 733 - 749
  • [45] A neural network-based control chart for monitoring and interpreting autocorrelated multivariate processes using layer-wise relevance propagation
    Sun, Jinwen
    Zhou, Shiyu
    Veeramani, Dharmaraj
    QUALITY ENGINEERING, 2023, 35 (01) : 33 - 47
  • [46] Stochastic Neural Networks with Layer-Wise Adjustable Sequence Length
    Wang, Ziheng
    Reviriego, Pedro
    Niknia, Farzad
    Liu, Shanshan
    Gao, Zhen
    Lombardi, Fabrizio
    2024 IEEE 24TH INTERNATIONAL CONFERENCE ON NANOTECHNOLOGY, NANO 2024, 2024, : 436 - 441
  • [47] Layer-Wise Training to Create Efficient Convolutional Neural Networks
    Zeng, Linghua
    Tian, Xinmei
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT II, 2017, 10635 : 631 - 641
  • [48] SLRP: Improved heatmap generation via selective layer-wise relevance propagation
    Jung, Yeon-Jee
    Han, Seung-Ho
    Choi, Ho-Jin
    ELECTRONICS LETTERS, 2021, 57 (10) : 393 - 396
  • [49] LAYER-WISE ADAPTIVE GRAPH CONVOLUTION NETWORKS USING GENERALIZED PAGERANK
    Wimalawarne, Kishan
    Suzuki, Taiji
    arXiv, 2021,
  • [50] Improving deep neural network generalization and robustness to background bias via layer-wise relevance propagation optimization
    Bassi, Pedro R. A. S.
    Dertkigil, Sergio S. J.
    Cavalli, Andrea
    NATURE COMMUNICATIONS, 2024, 15 (01)