TOPOLOGICAL ADVERSARIAL ATTACKS ON GRAPH NEURAL NETWORKS VIA PROJECTED META LEARNING

Cited by: 1
Authors
Aburidi, Mohammed [1 ]
Marcia, Roummel [1 ]
Affiliations
[1] Univ Calif Merced, Dept Appl Math, Merced, CA 95343 USA
Keywords
Graph Neural Networks; Meta-Learning; Adversarial Attack; Adversarial Training; Defense; Focal Loss;
DOI
10.1109/EAIS58494.2024.10569101
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Neural Networks (GNNs) have achieved significant success across diverse domains such as social and biological networks. However, their susceptibility to adversarial attacks poses substantial risks in security-sensitive settings: even imperceptible perturbations to a graph can cause considerable performance degradation, underscoring the need for robust GNN models in safety- and privacy-critical applications. To address this challenge, we propose training-time, optimization-based attacks on GNNs that modify the graph structure. Our approach uses meta-gradients to solve the bilevel optimization problem inherent in training-time attacks: the graph is treated as a hyperparameter to optimize, and the attacks are generated via convex relaxation and projected momentum optimization. In our evaluation on node classification tasks, our attacks outperform state-of-the-art methods under the same perturbation budget, demonstrating the effectiveness of our approach. Our experiments consistently show that even minor graph perturbations cause a significant performance decline for graph convolutional networks. Moreover, our attacks require no prior knowledge of, or access to, the target classifiers. This work contributes to strengthening the resilience of GNNs against adversarial manipulation in real-world scenarios.
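The pipeline the abstract describes (relax the discrete graph, differentiate through surrogate training to obtain meta-gradients, update the relaxed adjacency with momentum, and project back onto the perturbation budget) can be illustrated with a short sketch. The following is a minimal, self-contained PyTorch example on synthetic data; the two-layer GCN surrogate, the box-clip-plus-top-k budget projection, and all hyperparameters are simplifying assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a meta-gradient structure attack (assumptions: PyTorch;
# synthetic data; a tiny two-layer GCN surrogate; the budget projection is
# simplified to a box clip plus keeping the top-`budget` perturbation entries).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, h, c = 30, 8, 16, 3                    # nodes, features, hidden, classes
X = torch.randn(n, d)                        # node features
y = torch.randint(0, c, (n,))                # node labels
A = (torch.rand(n, n) < 0.1).float()
A = torch.triu(A, 1); A = A + A.T            # symmetric adjacency, no self-loops
train_mask = torch.zeros(n, dtype=torch.bool)
train_mask[: n // 2] = True

def gcn_forward(A_pert, X, W1, W2):
    """Two-layer GCN surrogate with symmetric normalization of A + I."""
    A_tilde = A_pert + torch.eye(n)
    d_inv_sqrt = A_tilde.sum(1).clamp(min=1e-6).pow(-0.5)
    S = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return S @ torch.relu(S @ X @ W1) @ W2

budget = 20                                   # edge-flip budget
P = torch.zeros(n, n, requires_grad=True)     # relaxed perturbation in [0, 1]
momentum = torch.zeros(n, n)
mu, lr, inner_lr = 0.9, 0.1, 0.1

for step in range(50):
    # Relaxed edge flip: entries of P move A_ij toward 1 - A_ij.
    A_pert = (A + (1 - 2 * A) * P).clamp(0, 1)

    # Inner problem: a few unrolled surrogate-training steps, kept in the
    # autograd graph so the attacker gradient flows through training
    # (this is the meta-gradient).
    W1 = (0.1 * torch.randn(d, h)).requires_grad_()
    W2 = (0.1 * torch.randn(h, c)).requires_grad_()
    for _ in range(5):
        logits = gcn_forward(A_pert, X, W1, W2)
        inner_loss = F.cross_entropy(logits[train_mask], y[train_mask])
        g1, g2 = torch.autograd.grad(inner_loss, (W1, W2), create_graph=True)
        W1, W2 = W1 - inner_lr * g1, W2 - inner_lr * g2

    # Outer (attacker) objective: increase the loss on the remaining nodes.
    atk_loss = F.cross_entropy(
        gcn_forward(A_pert, X, W1, W2)[~train_mask], y[~train_mask])
    grad_P, = torch.autograd.grad(atk_loss, P)

    with torch.no_grad():
        momentum = mu * momentum + grad_P     # heavy-ball momentum
        P += lr * momentum                    # gradient ascent on the attack loss
        P.clamp_(0, 1)                        # box projection onto [0, 1]
        # Simplified budget projection: keep only the `budget` largest entries.
        thresh = P.flatten().topk(budget).values.min()
        P[P < thresh] = 0.0

# Discretize: flip the `budget` entries of P with the largest relaxed values.
idx = P.detach().flatten().topk(budget).indices
A_atk = A.clone().flatten()
A_atk[idx] = 1.0 - A_atk[idx]
A_atk = A_atk.reshape(n, n)
print("edges flipped:", int((A_atk != A).sum().item()))
```

Note that the top-k truncation above is only a crude stand-in for the convex relaxation and projection step referenced in the abstract; a faithful implementation would project onto the admissible perturbation set (and enforce symmetry of the flips) at every iteration.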
Pages: 330 - 337
Number of pages: 8
Related Papers
50 records in total
  • [1] Fortifying graph neural networks against adversarial attacks via ensemble learning
    Zhou, Chenyu
    Huang, Wei
    Miao, Xinyuan
    Peng, Yabin
    Kong, Xianglong
    Cao, Yi
    Chen, Xi
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [2] Adversarial Attacks on Graph Neural Networks via Node Injections: A Hierarchical Reinforcement Learning Approach
    Sun, Yiwei
    Wang, Suhang
    Tang, Xianfeng
    Hsieh, Tsung-Yu
    Honavar, Vasant
    WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020), 2020, : 673 - 683
  • [3] Defending adversarial attacks in Graph Neural Networks via tensor enhancement
    Zhang, Jianfu
    Hong, Yan
    Cheng, Dawei
    Zhang, Liqing
    Zhao, Qibin
    PATTERN RECOGNITION, 2025, 158
  • [4] Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training
    Tian, Hu
    Ye, Bowei
    Zheng, Xiaolong
    Wu, Desheng Dash
    IFAC PAPERSONLINE, 2020, 53 (05): 420 - 425
  • [5] Adversarial Attacks on Neural Networks for Graph Data
    Zuegner, Daniel
    Akbarnejad, Amir
    Guennemann, Stephan
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 6246 - 6250
  • [6] Exploratory Adversarial Attacks on Graph Neural Networks
    Lin, Xixun
    Zhou, Chuan
    Yang, Hong
    Wu, Jia
    Wang, Haibo
    Cao, Yanan
    Wang, Bin
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020, : 1136 - 1141
  • [7] Adversarial Attacks on Neural Networks for Graph Data
    Zuegner, Daniel
    Akbarnejad, Amir
    Guennemann, Stephan
    KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 2847 - 2856
  • [8] Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification
    Wang, Xin
    Chang, Heng
    Xie, Beini
    Bian, Tian
    Zhou, Shiji
    Wang, Daixin
    Zhang, Zhiqiang
    Zhu, Wenwu
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (05) : 2166 - 2178
  • [9] Adversarial attacks against dynamic graph neural networks via node injection
    Jiang, Yanan
    Xia, Hui
    HIGH-CONFIDENCE COMPUTING, 2024, 4 (01):
  • [10] Defending against adversarial attacks on graph neural networks via similarity property
    Yao, Minghong
    Yu, Haizheng
    Bian, Hong
    AI COMMUNICATIONS, 2023, 36 (01) : 27 - 39