Transferring Robustness for Graph Neural Network Against Poisoning Attacks

Cited by: 95
Authors
Tang, Xianfeng [1 ]
Li, Yandong [2 ]
Sun, Yiwei [1 ]
Yao, Huaxiu [1 ]
Mitra, Prasenjit [1 ]
Wang, Suhang [1 ]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Univ Cent Florida, Orlando, FL 32816 USA
Funding
US National Science Foundation;
Keywords
Robust Graph Neural Networks; Adversarial Defense;
DOI
10.1145/3336191.3371851
Chinese Library Classification
TP301 [Theory and Methods];
Subject Classification Code
081202 ;
Abstract
Graph neural networks (GNNs) are widely used in many applications, but their robustness against adversarial attacks has been criticized. Prior studies show that unnoticeable modifications to graph topology or node features can significantly reduce the performance of GNNs. Designing graph neural networks that are robust against poisoning attacks is very challenging, and several efforts have been made. Existing work aims to reduce the negative impact of adversarial edges using only the poisoned graph, which is sub-optimal because it fails to discriminate adversarial edges from normal ones. On the other hand, clean graphs from domains similar to the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we create supervised knowledge for training the ability to detect adversarial edges, thereby elevating the robustness of GNNs. However, this potential of clean graphs is neglected by existing work. To this end, we investigate the novel problem of improving the robustness of GNNs against poisoning attacks by exploring clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs.
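The penalized aggregation idea described above can be illustrated with a minimal sketch: suspected adversarial edges have their raw attention scores reduced before the softmax, so they receive lower attention coefficients in the neighborhood aggregation. Note this is a simplified illustration of the general mechanism, not the paper's actual implementation; the function names, the fixed `penalty` constant, and the boolean adversarial mask are assumptions (in PA-GNN the penalty is learned via meta-optimization rather than supplied as a mask).

```python
import numpy as np

def penalized_attention(scores, adversarial_mask, penalty=5.0):
    """Softmax attention over a node's neighbors, where edges suspected
    to be adversarial are penalized so they get lower coefficients.

    scores: raw attention scores, shape (num_neighbors,)
    adversarial_mask: bool array, True where an edge is suspected adversarial
    penalty: amount subtracted from the raw score of a suspected edge
    """
    penalized = scores - penalty * adversarial_mask.astype(float)
    e = np.exp(penalized - penalized.max())  # numerically stable softmax
    return e / e.sum()

def aggregate(h_neighbors, coeffs):
    """Weighted sum of neighbor features using the attention coefficients."""
    return coeffs @ h_neighbors

# Three neighbors with equal raw scores; the third edge is flagged.
scores = np.array([1.0, 1.0, 1.0])
mask = np.array([False, False, True])
coeffs = penalized_attention(scores, mask)
# The flagged edge's coefficient is much smaller, so its (possibly
# adversarial) neighbor contributes little to the aggregated feature.
h_neighbors = np.array([[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
h_agg = aggregate(h_neighbors, coeffs)
```

The key design point the abstract highlights is that lowering attention coefficients, rather than deleting edges outright, lets the penalty be trained end-to-end on perturbed clean graphs and then transferred to the poisoned target graph.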
Pages: 600 / 608
Page count: 9
Related Papers
50 total
  • [31] On the Robustness of Neural-Enhanced Video Streaming against Adversarial Attacks
    Zhou, Qihua
    Guo, Jingcai
    Guo, Song
    Li, Ruibin
    Zhang, Jie
    Wang, Bingjie
    Xu, Zhenda
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 17123 - 17131
  • [32] Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
    Ayaz, Ferheen
    Zakariyya, Idris
    Cano, José
    Keoh, Sye Loong
    Singer, Jeremy
    Pau, Danilo
    Kharbouche-Harrari, Mounia
    arXiv, 2023,
  • [33] Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
    Xie, Chulin
    Long, Yunhui
    Chen, Pin-Yu
    Li, Qinbin
    Koyejo, Sanmi
    Li, Bo
    PROCEEDINGS OF THE 2023 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2023, 2023, : 1511 - 1525
  • [34] The robustness of popular multiclass machine learning models against poisoning attacks: Lessons and insights
    Maabreh, Majdi
    Maabreh, Arwa
    Qolomany, Basheer
    Al-Fuqaha, Ala
    INTERNATIONAL JOURNAL OF DISTRIBUTED SENSOR NETWORKS, 2022, 18 (07)
  • [35] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks
    Liu, Yi-Ling
    Lomuscio, Alessio
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [36] Neural Network Spectral Robustness under Perturbations of the Underlying Graph
    Radulescu, Anca
    NEURAL COMPUTATION, 2016, 28 (01) : 1 - 44
  • [37] Model Stealing Attacks Against Inductive Graph Neural Networks
    Shen, Yun
    He, Xinlei
    Han, Yufei
    Zhang, Yang
    43RD IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP 2022), 2022, : 1175 - 1192
  • [38] Robust Heterogeneous Graph Neural Networks against Adversarial Attacks
    Zhang, Mengmei
    Wang, Xiao
    Zhu, Meiqi
    Shi, Chuan
    Zhang, Zhiqiang
    Zhou, Jun
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 4363 - 4370
  • [39] Graph augmentation against structural poisoning attacks via structure and attribute reconciliation
    Dai, Yumeng
    Shao, Yifan
    Wang, Chenxu
    Guan, Xiaohong
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024,
  • [40] GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks
    Zhang, Xiang
    Zitnik, Marinka
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33