Transferring Robustness for Graph Neural Network Against Poisoning Attacks

Cited by: 95
Authors
Tang, Xianfeng [1 ]
Li, Yandong [2 ]
Sun, Yiwei [1 ]
Yao, Huaxiu [1 ]
Mitra, Prasenjit [1 ]
Wang, Suhang [1 ]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Univ Cent Florida, Orlando, FL 32816 USA
Funding
National Science Foundation (NSF);
Keywords
Robust Graph Neural Networks; Adversarial Defense;
DOI
10.1145/3336191.3371851
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Graph neural networks (GNNs) are widely used in many applications, but their robustness against adversarial attacks has been questioned. Prior studies show that unnoticeable modifications to the graph topology or node features can significantly degrade the performance of GNNs. Designing graph neural networks that are robust to poisoning attacks is very challenging, and several efforts have been made. Existing work aims to reduce the negative impact of adversarial edges using only the poisoned graph, which is sub-optimal because it cannot discriminate adversarial edges from normal ones. On the other hand, clean graphs from domains similar to the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we can create supervised knowledge for learning to detect adversarial edges, thereby improving the robustness of GNNs; however, this potential of clean graphs is neglected by existing work. To this end, we investigate the novel problem of improving the robustness of GNNs against poisoning attacks by exploiting clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs.
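The core idea of the penalized aggregation described above can be sketched in a few lines. The following is a minimal toy illustration, not the paper's actual method: the function names, the penalty margin `eta`, and the binary `adv_mask` (1 for an edge suspected to be adversarial) are hypothetical simplifications. In PA-GNN this penalty is learned via meta-optimization rather than applied from a given mask; here we only show how subtracting a margin from an edge's raw attention score drives its softmax-normalized coefficient toward zero.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def penalized_aggregate(h_neighbors, scores, adv_mask, eta=5.0):
    """Attention-weighted neighbor aggregation with a penalty
    (hypothetical simplification of PA-GNN's penalized aggregation).

    h_neighbors: (k, d) neighbor feature matrix
    scores:      (k,)   raw attention scores
    adv_mask:    (k,)   1.0 for edges flagged as adversarial
    eta:         margin subtracted from flagged edges' scores
    """
    penalized = scores - eta * adv_mask   # push flagged edges down
    alpha = softmax(penalized)            # attention coefficients
    return alpha @ h_neighbors, alpha     # weighted sum of neighbors

# Toy example: 3 neighbors with equal raw scores; the third edge
# (an outlier-feature neighbor) is flagged as adversarial.
h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [5.0, 5.0]])
s = np.array([1.0, 1.0, 1.0])
mask = np.array([0.0, 0.0, 1.0])
agg, alpha = penalized_aggregate(h, s, mask)
```

With equal raw scores, the two clean edges receive nearly all of the attention mass, while the flagged edge's coefficient collapses to roughly `exp(-eta)` of theirs, so its features contribute little to the aggregated representation.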
Pages: 600-608
Number of pages: 9
Related Papers
50 records in total
  • [21] CRAB: CERTIFIED PATCH ROBUSTNESS AGAINST POISONING-BASED BACKDOOR ATTACKS
    Ji, Huxiao
    Li, Jie
    Wu, Chentao
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 2486 - 2490
  • [22] Model Inversion Attacks Against Graph Neural Networks
    Zhang, Zaixi
    Liu, Qi
    Huang, Zhenya
    Wang, Hao
    Lee, Chee-Kong
    Chen, Enhong
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (09) : 8729 - 8741
  • [23] Explanatory subgraph attacks against Graph Neural Networks
    Wang, Huiwei
    Liu, Tianhua
    Sheng, Ziyu
    Li, Huaqing
    NEURAL NETWORKS, 2024, 172
  • [24] Modelling Data Poisoning Attacks Against Convolutional Neural Networks
    Jonnalagadda, Annapurna
    Mohanty, Debdeep
    Zakee, Ashraf
    Kamalov, Firuz
    JOURNAL OF INFORMATION & KNOWLEDGE MANAGEMENT, 2024, 23 (02)
  • [25] A comparative analysis of network robustness against different link attacks
    Duan, Boping
    Liu, Jing
    Zhou, Mingxing
    Ma, Liangliang
    PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2016, 448 : 144 - 153
  • [26] Robustness of the public transport network against attacks on its routes
    Cicchini, Tomas
    Caridi, Ines
    Ermann, Leonardo
    CHAOS SOLITONS & FRACTALS, 2024, 184
  • [27] GCRL: a graph neural network framework for network connectivity robustness learning
    Zhang, Yu
    Chen, Haowei
    Chen, Qiyu
    Ding, Jie
    Li, Xiang
    NEW JOURNAL OF PHYSICS, 2024, 26 (09):
  • [28] Hierarchical Adversarial Attacks Against Graph-Neural-Network-Based IoT Network Intrusion Detection System
    Zhou, Xiaokang
    Liang, Wei
    Li, Weimin
    Yan, Ke
    Shimizu, Shohei
    Wang, Kevin I-Kai
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (12) : 9310 - 9319
  • [29] Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity
    Aquino, Bernardo
    Rahnama, Arash
    Seiler, Peter
    Lin, Lizhen
    Gupta, Vijay
    IEEE CONTROL SYSTEMS LETTERS, 2022, 6 : 2341 - 2346
  • [30] Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
    Ayaz, Ferheen
    Zakariyya, Idris
    Cano, Jose
    Keoh, Sye Loong
    Singer, Jeremy
    Pau, Danilo
    Kharbouche-Harrari, Mounia
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,