TOPOLOGICAL ADVERSARIAL ATTACKS ON GRAPH NEURAL NETWORKS VIA PROJECTED META LEARNING

Cited by: 1
Authors
Aburidi, Mohammed [1 ]
Marcia, Roummel [1 ]
Affiliation
[1] Univ Calif Merced, Dept Appl Math, Merced, CA 95343 USA
Keywords
Graph Neural Networks; Meta-Learning; Adversarial Attack; Adversarial Training; Defense; Focal Loss;
DOI
10.1109/EAIS58494.2024.10569101
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Graph Neural Networks (GNNs) have demonstrated significant success across diverse domains such as social and biological networks. However, their susceptibility to adversarial attacks poses substantial risks in security-sensitive settings. Even imperceptible perturbations to a graph can cause considerable performance degradation, highlighting the urgent need for robust GNN models in safety- and privacy-critical applications. To address this challenge, we propose training-time, optimization-based attacks on GNNs that modify the graph structure. Our approach uses meta-gradients to solve the bilevel optimization problem inherent in training-time attacks: the graph is treated as a hyperparameter to optimize, and the attacks are then generated via convex relaxation and projected momentum optimization. In our evaluation on node classification tasks, our attacks outperform state-of-the-art methods under the same perturbation budget, underscoring the effectiveness of our approach. Our experiments consistently show that even minor graph perturbations cause a significant performance decline in graph convolutional networks. Moreover, our attacks require no prior knowledge of, or access to, the target classifiers. This research contributes to strengthening the resilience of GNNs against adversarial manipulation in real-world scenarios.
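The projected momentum optimization the abstract describes can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: `grad_fn` stands in for the meta-gradient of the attack loss with respect to the relaxed edge-flip variables (in the real attack it would be obtained by differentiating through the surrogate model's training), `project` is the Euclidean projection onto the budget-constrained box implied by the convex relaxation, and all function names and hyperparameters here are our own assumptions.

```python
import numpy as np

def project(s, budget, iters=50):
    """Euclidean projection of s onto {x : 0 <= x <= 1, sum(x) <= budget},
    computed by bisection on the dual variable mu."""
    clipped = np.clip(s, 0.0, 1.0)
    if clipped.sum() <= budget:
        return clipped
    lo, hi = 0.0, float(s.max())
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.clip(s - mu, 0.0, 1.0).sum() > budget:
            lo = mu  # still over budget: shift down further
        else:
            hi = mu
    return np.clip(s - hi, 0.0, 1.0)

def pgd_momentum_attack(grad_fn, n, budget, steps=100, lr=0.1, beta=0.9):
    """Projected momentum ascent on relaxed edge-flip variables s in [0,1]^n.
    grad_fn(s) returns the (meta-)gradient of the attack loss w.r.t. s."""
    s = np.zeros(n)
    v = np.zeros(n)
    for _ in range(steps):
        v = beta * v + grad_fn(s)      # momentum accumulation
        s = project(s + lr * v, budget)  # ascent step + projection
    return s
```

With a linear surrogate loss, the iterates concentrate the budget on the entries with the largest gradients; in the actual attack, the relaxed solution would finally be discretized into concrete edge flips within the perturbation budget.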
Pages: 330 - 337
Page count: 8
Related papers (50 total)
  • [41] On the Robustness of Bayesian Neural Networks to Adversarial Attacks
    Bortolussi, Luca
    Carbone, Ginevra
    Laurenti, Luca
    Patane, Andrea
    Sanguinetti, Guido
    Wicker, Matthew
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 14
  • [42] Backdoor Attacks to Graph Neural Networks
    Zhang, Zaixi
    Jia, Jinyuan
    Wang, Binghui
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 26TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2021, 2021, : 15 - 26
  • [43] Forming Adversarial Example Attacks Against Deep Neural Networks With Reinforcement Learning
    Akers, Matthew
    Barton, Armon
    COMPUTER, 2024, 57 (01) : 88 - 99
  • [44] Adversarial Attacks on Graph Classification via Bayesian Optimisation
    Wan, Xingchen
    Kenlay, Henry
    Ru, Binxin
    Blaas, Arno
    Osborne, Michael A.
    Dong, Xiaowen
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [45] Adversarial Attacks on Node Embeddings via Graph Poisoning
    Bojchevski, Aleksandar
    Günnemann, Stephan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [46] Robust Graph Neural Networks via Ensemble Learning
    Lin, Qi
    Yu, Shuo
    Sun, Ke
    Zhao, Wenhong
    Alfarraj, Osama
    Tolba, Amr
    Xia, Feng
    MATHEMATICS, 2022, 10 (08)
  • [47] Streaming Graph Neural Networks via Continual Learning
    Wang, Junshan
    Song, Guojie
    Wu, Yi
    Wang, Liang
    CIKM '20: PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, 2020, : 1515 - 1524
  • [48] The identical distribution hypothesis is equivalent to the parameter discrepancy hypothesis: Adversarial attacks on graph neural networks
    Wu, Yiteng
    Liu, Wei
    Yu, Xuqiao
    INFORMATION SCIENCES, 2023, 623 : 481 - 492
  • [49] Robust Regularization Design of Graph Neural Networks Against Adversarial Attacks Based on Lyapunov Theory
    Yan, Wenjie
    Li, Ziqi
    Qi, Yongjun
    CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (03) : 732 - 741