TOPOLOGICAL ADVERSARIAL ATTACKS ON GRAPH NEURAL NETWORKS VIA PROJECTED META LEARNING

Cited by: 1
Authors: Aburidi, Mohammed [1]; Marcia, Roummel [1]
Affiliations: [1] Univ Calif Merced, Dept Appl Math, Merced, CA 95343 USA
Keywords: Graph Neural Networks; Meta-Learning; Adversarial Attack; Adversarial Training; Defense; Focal Loss
DOI: 10.1109/EAIS58494.2024.10569101
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Graph Neural Networks (GNNs) have achieved significant success across diverse domains such as social and biological networks. However, their susceptibility to adversarial attacks poses substantial risks in security-sensitive contexts: even imperceptible perturbations to a graph can cause considerable performance degradation, highlighting the urgent need for robust GNN models in safety- and privacy-critical applications. To address this challenge, we propose training-time, optimization-based attacks on GNNs that target modifications to the graph structure. Our approach uses meta-gradients to tackle the bilevel problem inherent in training-time attacks: the graph is treated as a hyperparameter to optimize, and the attacks are then generated via convex relaxation and projected momentum optimization. In our evaluation on node classification tasks, our attacks surpass state-of-the-art methods within the same perturbation budget, underscoring the effectiveness of our approach. Our experiments consistently demonstrate that even minor graph perturbations cause a significant performance decline for graph convolutional networks. The attacks require no prior knowledge of, or access to, the target classifiers. This research contributes to bolstering the resilience of GNNs against adversarial manipulation in real-world scenarios.
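To make the bilevel structure concrete: the inner problem trains surrogate GNN weights on the perturbed graph, while the outer problem updates a relaxed perturbation matrix by ascending the meta-gradient of the attack loss, with momentum and a projection back onto the budget constraint. Below is a minimal PyTorch sketch of this projected meta-gradient idea, assuming a two-layer GCN surrogate and a simple clamp-and-rescale projection; all names (normalize, gcn_forward, meta_attack, budget) and hyperparameters are hypothetical, and the paper's exact relaxation and projection steps may differ.

    # Minimal sketch of a projected meta-gradient structure attack
    # (illustrative only; not the authors' implementation).
    import torch
    import torch.nn.functional as F

    def normalize(A):
        # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
        A_tilde = A + torch.eye(A.shape[0])
        d_inv_sqrt = torch.diag(A_tilde.sum(1).pow(-0.5))
        return d_inv_sqrt @ A_tilde @ d_inv_sqrt

    def gcn_forward(A_hat, X, W1, W2):
        # Two-layer GCN surrogate: A_hat relu(A_hat X W1) W2.
        return A_hat @ torch.relu(A_hat @ X @ W1) @ W2

    def meta_attack(A, X, y, train_mask, budget, meta_steps=10,
                    inner_steps=15, inner_lr=0.1, outer_lr=0.5,
                    momentum=0.9, hidden=16):
        # S in [0, 1]^{n x n} is the convex relaxation of binary edge flips.
        # (For undirected graphs one would also symmetrize S; omitted here.)
        S = torch.zeros_like(A).requires_grad_()
        velocity = torch.zeros_like(A)  # momentum buffer for the outer update
        for _ in range(meta_steps):
            A_hat = normalize(A + (1 - 2 * A) * S)  # S -> 1 flips an edge
            # Inner problem: train surrogate weights on the perturbed graph,
            # keeping the computation graph so meta-gradients can flow.
            W1 = (0.1 * torch.randn(X.shape[1], hidden)).requires_grad_()
            W2 = (0.1 * torch.randn(hidden, int(y.max()) + 1)).requires_grad_()
            for _ in range(inner_steps):
                loss = F.cross_entropy(
                    gcn_forward(A_hat, X, W1, W2)[train_mask], y[train_mask])
                g1, g2 = torch.autograd.grad(loss, (W1, W2), create_graph=True)
                W1, W2 = W1 - inner_lr * g1, W2 - inner_lr * g2
            # Outer problem: meta-gradient of the attack loss w.r.t. the graph.
            attack_loss = F.cross_entropy(
                gcn_forward(A_hat, X, W1, W2)[train_mask], y[train_mask])
            meta_grad, = torch.autograd.grad(attack_loss, S)
            with torch.no_grad():
                velocity = momentum * velocity + meta_grad  # momentum step
                S_new = (S + outer_lr * velocity).clamp(0, 1)  # box constraint
                if S_new.sum() > budget:  # crude projection onto the budget
                    S_new = S_new * (budget / S_new.sum())
                S = S_new.requires_grad_()
        # Discretize: flip the budget-many highest-scoring edges.
        flips = torch.zeros_like(A)
        flips.view(-1)[torch.topk(S.detach().flatten(), int(budget)).indices] = 1.0
        return A + (1 - 2 * A) * flips

    # Toy usage on a random undirected graph (synthetic data, illustration only).
    torch.manual_seed(0)
    n, f = 20, 8
    A = (torch.rand(n, n) < 0.2).float().triu(1); A = A + A.T
    X, y = torch.randn(n, f), torch.randint(0, 3, (n,))
    mask = torch.zeros(n, dtype=torch.bool); mask[:10] = True
    A_attacked = meta_attack(A, X, y, mask, budget=5)

The clamp-and-rescale step stands in for an exact Euclidean projection onto the intersection of the box and budget constraints, and the momentum buffer on the meta-gradient mirrors the projected momentum optimization mentioned in the abstract.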
Pages: 330-337 (8 pages)
Related Papers (50 total)
  • [31] Centered-Ranking Learning Against Adversarial Attacks in Neural Networks
    Appiah, Benjamin; Adu, Adolph S. Y.; Osei, Isaac; Assamah, Gabriel; Hammond, Ebenezer N. A.
    International Journal of Network Security, 2023, 25(05): 814-820
  • [32] Uncertainty estimation-based adversarial attacks: a viable approach for graph neural networks
    Alarab, Ismail; Prakoonwit, Simant
    Soft Computing, 2023, 27(12): 7925-7937
  • [33] Exploratory Adversarial Attacks on Graph Neural Networks for Semi-Supervised Node Classification
    Lin, Xixun; Zhou, Chuan; Wu, Jia; Yang, Hong; Wang, Haibo; Cao, Yanan; Wang, Bin
    Pattern Recognition, 2023, 133
  • [35] Graph-Fraudster: Adversarial Attacks on Graph Neural Network-Based Vertical Federated Learning
    Chen, Jinyin; Huang, Guohan; Zheng, Haibin; Yu, Shanqing; Jiang, Wenrong; Cui, Chen
    IEEE Transactions on Computational Social Systems, 2023, 10(02): 492-506
  • [36] Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation
    He, Huarui; Wang, Jie; Zhang, Zhanqiu; Wu, Feng
    Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), 2022: 534-544
  • [37] A Dual Robust Graph Neural Network Against Graph Adversarial Attacks
    Tao, Qian; Liao, Jianpeng; Zhang, Enze; Li, Lusi
    Neural Networks, 2024, 175
  • [38] Learning Topological Horseshoe via Deep Neural Networks
    Yang, Xiao-Song; Cheng, Junfeng
    International Journal of Bifurcation and Chaos, 2024, 34(04)
  • [39] Robust Graph Convolutional Networks Against Adversarial Attacks
    Zhu, Dingyuan; Zhang, Ziwei; Cui, Peng; Zhu, Wenwu
    KDD'19: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019: 1399-1407
  • [40] Defending Graph Convolutional Networks Against Adversarial Attacks
    Ioannidis, Vassilis N.; Giannakis, Georgios B.
    2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2020: 8469-8473