Defending adversarial attacks in Graph Neural Networks via tensor enhancement

Cited by: 1
Authors
Zhang, Jianfu [1 ,3 ]
Hong, Yan [4 ]
Cheng, Dawei [5 ]
Zhang, Liqing [2 ]
Zhao, Qibin [3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
[3] RIKEN AIP, Tokyo, Japan
[4] Ant Grp, Hangzhou, Peoples R China
[5] Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph Neural Networks; Adversarial robustness; Tensor decomposition;
DOI
10.1016/j.patcog.2024.110954
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Graph Neural Networks (GNNs) have demonstrated remarkable success across diverse fields, yet remain susceptible to subtle adversarial perturbations that significantly degrade performance. Addressing this vulnerability remains a formidable challenge. Current defense strategies focus on edge-specific regularization within adversarial graphs, often overlooking the inter-edge structural dependencies and the interplay of various robustness attributes. This paper introduces a novel tensor-based framework for GNNs, aimed at reinforcing graph robustness against adversarial influences. By employing tensor approximation, our method systematically aggregates and compresses diverse predefined robustness features of adversarial graphs into a low-rank representation. This approach jointly preserves the structural integrity of the graph and its robustness characteristics. Comprehensive experiments on real-world graph datasets demonstrate that our framework not only effectively counters diverse types of adversarial attacks but also surpasses existing leading defense mechanisms in performance.
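The record does not include the paper's implementation details. As a rough, hypothetical sketch of the idea described in the abstract, the NumPy code below stacks several illustrative "robustness views" of a graph (a raw adjacency matrix plus two placeholder filtered variants) into a 3-way tensor and compresses it with a truncated higher-order SVD, one standard low-rank tensor approximation. The helper names (`unfold`, `mode_product`, `truncated_hosvd`), the choice of views, and the ranks are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    moved = np.moveaxis(T, mode, 0)                       # bring `mode` to the front
    out = M @ moved.reshape(T.shape[mode], -1)            # matrix times mode-unfolding
    return np.moveaxis(out.reshape((M.shape[0],) + moved.shape[1:]), 0, mode)

def truncated_hosvd(T, ranks):
    """Low-multilinear-rank approximation of T via truncated HOSVD (illustrative)."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])                          # leading subspace of each mode
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)              # project onto the subspaces
    T_hat = core
    for mode, U in enumerate(factors):
        T_hat = mode_product(T_hat, U, mode)              # map back to the original space
    return T_hat

# Hypothetical robustness views of one (possibly attacked) graph:
rng = np.random.default_rng(0)
n, k = 100, 3
A = (rng.random((n, n)) < 0.05).astype(float)             # raw adjacency
A_filtered = A * (rng.random((n, n)) > 0.1)               # stand-in for a similarity-filtered view
A_smoothed = (A @ A.T) / n                                # stand-in for a feature-smoothed view
views = np.stack([A, A_filtered, A_smoothed], axis=-1)    # shape (n, n, k)

views_clean = truncated_hosvd(views, ranks=(20, 20, k))   # low-rank fusion of the views
A_clean = views_clean[..., 0]                             # denoised adjacency slice fed to a GNN
```

In the paper's framework the low-rank representation would be coupled with GNN training; the standalone approximation above is only meant to show how inter-edge structure and multiple robustness attributes can be compressed into a single low-rank tensor.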
Pages: 10
Related Papers (50 in total)
  • [42] Exploratory Adversarial Attacks on Graph Neural Networks for Semi-Supervised Node Classification
    Lin, Xixun
    Zhou, Chuan
    Wu, Jia
    Yang, Hong
    Wang, Haibo
    Cao, Yanan
    Wang, Bin
    PATTERN RECOGNITION, 2023, 133
  • [43] Uncertainty estimation-based adversarial attacks: a viable approach for graph neural networks
    Alarab, Ismail
    Prakoonwit, Simant
    SOFT COMPUTING, 2023, 27 (12) : 7925 - 7937
  • [44] Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation
    He, Huarui
    Wang, Jie
    Zhang, Zhanqiu
    Wu, Feng
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 534 - 544
  • [45] A Dual Robust Graph Neural Network Against Graph Adversarial Attacks
    Tao, Qian
    Liao, Jianpeng
    Zhang, Enze
    Li, Lusi
    NEURAL NETWORKS, 2024, 175
  • [46] Fight Perturbations With Perturbations: Defending Adversarial Attacks via Neuron Influence
    Chen, Ruoxi
    Jin, Haibo
    Zheng, Haibin
    Chen, Jinyin
    Liu, Zhenguang
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22 (02) : 1582 - 1595
  • [47] Robust Graph Convolutional Networks Against Adversarial Attacks
    Zhu, Dingyuan
    Zhang, Ziwei
    Cui, Peng
    Zhu, Wenwu
KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 1399 - 1407
  • [48] Edge-based Tensor prediction via graph neural networks
    Zhong, Yang
    Yu, Hongyu
    Gong, Xingao
    Xiang, Hongjun
    arXiv, 2022,
  • [49] On the Robustness of Bayesian Neural Networks to Adversarial Attacks
    Bortolussi, Luca
    Carbone, Ginevra
    Laurenti, Luca
    Patane, Andrea
    Sanguinetti, Guido
    Wicker, Matthew
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 14
  • [50] Backdoor Attacks to Graph Neural Networks
    Zhang, Zaixi
    Jia, Jinyuan
    Wang, Binghui
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 26TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2021, 2021, : 15 - 26