TOPOLOGICAL ADVERSARIAL ATTACKS ON GRAPH NEURAL NETWORKS VIA PROJECTED META LEARNING

Cited by: 1
Authors
Aburidi, Mohammed [1 ]
Marcia, Roummel [1 ]
Affiliations
[1] Univ Calif Merced, Dept Appl Math, Merced, CA 95343 USA
Keywords
Graph Neural Networks; Meta-Learning; Adversarial Attack; Adversarial Training; Defense; Focal Loss
DOI
10.1109/EAIS58494.2024.10569101
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Graph Neural Networks (GNNs) have demonstrated significant success across diverse domains such as social and biological networks. However, their susceptibility to adversarial attacks poses substantial risks in security-sensitive contexts. Even imperceptible perturbations to a graph can cause considerable performance degradation, highlighting the urgent need for robust GNN models that ensure safety and privacy in critical applications. To address this challenge, we propose training-time, optimization-based attacks on GNNs that specifically target modifications to the graph structure. Our approach uses meta-gradients to solve the bilevel optimization problem inherent in training-time attacks: the graph is treated as a hyperparameter to optimize, and convex relaxation together with projected momentum optimization is then used to generate the attacks. In our evaluation on node classification tasks, our attacks surpass state-of-the-art methods under the same perturbation budget, underscoring the effectiveness of our approach. Our experiments consistently show that even minor graph perturbations cause a significant performance decline in graph convolutional networks. Our attacks require no prior knowledge of, or access to, the target classifiers. This research contributes to bolstering the resilience of GNNs against adversarial manipulation in real-world scenarios.
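The projected momentum scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the convex relaxation treats each candidate edge flip as a continuous variable in [0, 1] with an L1 perturbation budget, and `attack_grad` is a placeholder for the meta-gradient of the attack loss (the differentiation through the surrogate model's training loop is elided).

```python
import numpy as np

def project_onto_budget(s, budget):
    """Project a relaxed edge-perturbation vector s onto the feasible set
    {s in [0,1]^n : sum(s) <= budget}, using bisection on the dual
    variable mu (a standard step in PGD-style topology attacks)."""
    clipped = np.clip(s, 0.0, 1.0)
    if clipped.sum() <= budget:
        return clipped  # already feasible; only box clipping needed
    # Otherwise find mu such that sum(clip(s - mu, 0, 1)) == budget.
    lo, hi = s.min() - 1.0, s.max()  # sum is n at lo, 0 at hi
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        if np.clip(s - mu, 0.0, 1.0).sum() > budget:
            lo = mu
        else:
            hi = mu
    return np.clip(s - 0.5 * (lo + hi), 0.0, 1.0)

def pgd_momentum_attack(attack_grad, n_edges, budget,
                        steps=100, lr=0.1, beta=0.9):
    """Projected momentum ascent on the relaxed perturbation.
    attack_grad(s) stands in for the meta-gradient of the attack loss
    with respect to the edge-flip variables."""
    s = np.zeros(n_edges)   # relaxed edge-flip variables
    v = np.zeros(n_edges)   # momentum buffer
    for _ in range(steps):
        v = beta * v + attack_grad(s)
        s = project_onto_budget(s + lr * v, budget)
    return s
```

After the ascent converges, a discrete attack is typically recovered by flipping the edges with the largest relaxed values, subject to the same budget.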
Pages: 330-337
Page count: 8
Related papers (50 total)
  • [21] Towards Query-limited Adversarial Attacks on Graph Neural Networks
    Li, Haoran
    Zhang, Jinhong
    Gao, Song
    Wu, Liwen
    Zhou, Wei
    Wang, Ruxin
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 516 - 521
  • [22] Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training
    Wu, Xu-Gang
    Wu, Hui-Jun
    Zhou, Xu
    Zhao, Xiang
    Lu, Kai
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2022, 37 (05) : 1161 - 1175
  • [24] Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
    Zhao, Xin
    Zhang, Zeru
    Zhang, Zijie
    Wu, Lingfei
    Jin, Jiayin
    Zhou, Yang
    Jin, Ruoming
    Dou, Dejing
    Yan, Da
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [25] Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks
    Takahashi, Tsubasa
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019, : 1395 - 1400
  • [26] Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks
    Guo, Haoqiang
    Peng, Lu
    Zhang, Jian
    Qi, Fang
    Duan, Lide
    2019 TENTH INTERNATIONAL GREEN AND SUSTAINABLE COMPUTING CONFERENCE (IGSC), 2019,
  • [27] Subgraph Learning for Topological Geolocalization with Graph Neural Networks
    Zha, Bing
    Yilmaz, Alper
    SENSORS, 2023, 23 (11)
  • [28] SAM: Query-efficient Adversarial Attacks against Graph Neural Networks
    Zhang, Chenhan
    Zhang, Shiyao
    Yu, James J. Q.
    Yu, Shui
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2023, 26 (04)
  • [29] Targeted Discrepancy Attacks: Crafting Selective Adversarial Examples in Graph Neural Networks
    Kwon, Hyun
    Baek, Jang-Woon
    IEEE ACCESS, 2025, 13 : 13700 - 13710
  • [30] NetFense: Adversarial Defenses Against Privacy Attacks on Neural Networks for Graph Data
    Hsieh, I-Chung
    Li, Cheng-Te
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 796 - 809