Defending adversarial attacks in Graph Neural Networks via tensor enhancement

Cited by: 1
Authors
Zhang, Jianfu [1 ,3 ]
Hong, Yan [4 ]
Cheng, Dawei [5 ]
Zhang, Liqing [2 ]
Zhao, Qibin [3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
[3] RIKEN AIP, Tokyo, Japan
[4] Ant Grp, Hangzhou, Peoples R China
[5] Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph Neural Networks; Adversarial robustness; Tensor decomposition;
DOI
10.1016/j.patcog.2024.110954
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Graph Neural Networks (GNNs) have demonstrated remarkable success across diverse fields, yet remain susceptible to subtle adversarial perturbations that significantly degrade performance. Addressing this vulnerability remains a formidable challenge. Current defense strategies focus on edge-specific regularization within adversarial graphs, often overlooking the inter-edge structural dependencies and the interplay of various robustness attributes. This paper introduces a novel tensor-based framework for GNNs, aimed at reinforcing graph robustness against adversarial influences. By employing tensor approximation, our method systematically aggregates and compresses diverse predefined robustness features of adversarial graphs into a low-rank representation. This approach harmoniously combines the integrity of graph structure and robustness characteristics. Comprehensive experiments on real-world graph datasets demonstrate that our framework not only effectively counters diverse types of adversarial attacks but also surpasses existing leading defense mechanisms in performance.
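The abstract describes stacking several predefined robustness features of an adversarial graph into a tensor and compressing it to a low-rank representation. A minimal sketch of that general idea, using a plain HOSVD-style truncation in NumPy (the function name, the choice of views, and the final averaging step are illustrative assumptions, not the paper's actual algorithm):

```python
import numpy as np

def low_rank_tensor_approx(views, rank):
    """Compress K "robustness views" of an N x N graph (e.g. the raw
    adjacency, a feature-similarity graph, a degree-normalized graph)
    into a low-rank tensor surrogate.

    Illustrative HOSVD-style truncation only -- a sketch of the
    generic low-rank idea, not the authors' method; `views` and
    `rank` are hypothetical names.
    """
    core = np.stack(views)                       # shape (K, N, N)
    for mode in range(core.ndim):
        moved = np.moveaxis(core, mode, 0)       # bring `mode` to front
        unf = moved.reshape(moved.shape[0], -1)  # mode unfolding
        U, s, Vt = np.linalg.svd(unf, full_matrices=False)
        r = min(rank, len(s))
        trunc = (U[:, :r] * s[:r]) @ Vt[:r]      # rank-r truncation
        core = np.moveaxis(trunc.reshape(moved.shape), 0, mode)
    # fuse the denoised views into one cleaned adjacency (assumption:
    # a simple mean; the paper's fusion step may differ)
    return core, core.mean(axis=0)
```

The cleaned adjacency returned here could then replace the attacked adjacency as input to a standard GNN; when `rank` is at least as large as every mode's dimension, the truncation is exact and the views pass through unchanged.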
Pages: 10
Related papers
50 records
  • [1] Defending against adversarial attacks on graph neural networks via similarity property
    Yao, Minghong
    Yu, Haizheng
    Bian, Hong
    AI COMMUNICATIONS, 2023, 36 (01) : 27 - 39
  • [2] GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks
    Zhang, Xiang
    Zitnik, Marinka
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [3] HeteroGuard: Defending Heterogeneous Graph Neural Networks against Adversarial Attacks
    Kumarasinghe, Udesh
    Nabeel, Mohamed
    De Zoysa, Kasun
    Gunawardana, Kasun
    Elvitigala, Charitha
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW, 2022, : 698 - 705
  • [4] DEFENDING GRAPH CONVOLUTIONAL NETWORKS AGAINST ADVERSARIAL ATTACKS
    Ioannidis, Vassilis N.
    Giannakis, Georgios B.
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 8469 - 8473
  • [5] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [6] Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training
    Tian, Hu
    Ye, Bowei
    Zheng, Xiaolong
    Wu, Desheng Dash
    IFAC PAPERSONLINE, 2020, 53 (05): : 420 - 425
  • [7] Adversarial Attacks on Neural Networks for Graph Data
    Zuegner, Daniel
    Akbarnejad, Amir
    Guennemann, Stephan
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 6246 - 6250
  • [8] Exploratory Adversarial Attacks on Graph Neural Networks
    Lin, Xixun
    Zhou, Chuan
    Yang, Hong
    Wu, Jia
    Wang, Haibo
    Cao, Yanan
    Wang, Bin
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020, : 1136 - 1141
  • [9] Adversarial Attacks on Neural Networks for Graph Data
    Zuegner, Daniel
    Akbarnejad, Amir
    Guennemann, Stephan
    KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 2847 - 2856
  • [10] Defending Against Adversarial Attacks via Neural Dynamic System
    Li, Xiyuan
    Zou, Xin
    Liu, Weiwei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022