Transferring Robustness for Graph Neural Network Against Poisoning Attacks

Cited by: 95
Authors
Tang, Xianfeng [1 ]
Li, Yandong [2 ]
Sun, Yiwei [1 ]
Yao, Huaxiu [1 ]
Mitra, Prasenjit [1 ]
Wang, Suhang [1 ]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Univ Cent Florida, Orlando, FL 32816 USA
Funding
National Science Foundation (USA);
Keywords
Robust Graph Neural Networks; Adversarial Defense;
DOI
10.1145/3336191.3371851
CLC Classification
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
Graph neural networks (GNNs) are widely used in many applications. However, their robustness against adversarial attacks has been criticized. Prior studies show that unnoticeable modifications to the graph topology or node features can significantly degrade the performance of GNNs. Designing graph neural networks that are robust against poisoning attacks is very challenging, and several efforts have been made. Existing work aims to reduce the negative impact of adversarial edges using only the poisoned graph, which is sub-optimal because it cannot discriminate adversarial edges from normal ones. On the other hand, clean graphs from domains similar to that of the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we create supervised knowledge for learning to detect adversarial edges, thereby elevating the robustness of GNNs. Such potential of clean graphs, however, is neglected by existing work. To this end, we investigate a novel problem: improving the robustness of GNNs against poisoning attacks by exploiting clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs.
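The penalized aggregation idea in the abstract — assigning adversarial edges lower attention coefficients — can be illustrated with a toy sketch. The function names, the fixed penalty `eta`, and the binary `penalty_mask` are illustrative assumptions for a single node's neighborhood; in PA-GNN the penalty is learned via meta-optimization on perturbed clean graphs, not applied through a hand-set mask:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def penalized_attention(logits, penalty_mask, eta=5.0):
    """Attention coefficients over one node's incident edges.
    Edges flagged in penalty_mask (1 = suspected adversarial) have
    eta subtracted from their raw attention logit, which drives
    their coefficient toward zero after the softmax."""
    shifted = [l - eta * m for l, m in zip(logits, penalty_mask)]
    return softmax(shifted)

# Toy example: three neighbors with equal raw scores; the second
# edge is flagged as suspected adversarial.
alpha = penalized_attention([0.8, 0.8, 0.8], [0, 1, 0], eta=5.0)
```

With equal raw scores, the flagged edge receives a coefficient of roughly e^-5 relative to the unflagged ones, so it contributes almost nothing to the aggregated neighbor representation.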
Pages: 600 - 608
Number of pages: 9
Related Papers
50 records in total
  • [41] Towards Class-Oriented Poisoning Attacks Against Neural Networks
    Zhao, Bingyin
    Lao, Yingjie
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 2244 - 2253
  • [42] Targeted Data Poisoning Attacks Against Continual Learning Neural Networks
    Li, Huayu
    Ditzler, Gregory
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [43] Cure-GNN: A Robust Curvature-Enhanced Graph Neural Network Against Adversarial Attacks
    Xiao, Yang
    Xing, Zhuolin
    Liu, Alex X.
    Bai, Lei
    Pei, Qingqi
    Yao, Lina
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (05) : 4214 - 4229
  • [44] Enhancing the Robustness and Security Against Various Attacks in a Scale-Free Network
    Keerthana, G.
    Anandan, P.
    Nandhagopal, N.
    WIRELESS PERSONAL COMMUNICATIONS, 2021, 117 : 3029 - 3050
  • [46] Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
    Wang, Binghui
    Jia, Jinyuan
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1645 - 1653
  • [47] Defense Against Software-Defined Network Topology Poisoning Attacks
    Gao, Yang
    Xu, Mingdi
    TSINGHUA SCIENCE AND TECHNOLOGY, 2023, 28 (01): : 39 - 46
  • [48] Resilience of Pruned Neural Network Against Poisoning Attack
    Zhao, Bingyin
    Lao, Yingjie
    PROCEEDINGS OF THE 2018 13TH INTERNATIONAL CONFERENCE ON MALICIOUS AND UNWANTED SOFTWARE (MALWARE 2018), 2018, : 78 - 83
  • [49] FedKC: Personalized Federated Learning With Robustness Against Model Poisoning Attacks in the Metaverse for Consumer Health
    Sun, Le
    Tian, Jing
    Muhammad, Ghulam
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (03) : 5644 - 5653
  • [50] Data Poisoning Attacks against Autoencoder-based Anomaly Detection Models: a Robustness Analysis
    Bovenzi, Giampaolo
    Foggia, Alessio
    Santella, Salvatore
    Testa, Alessandro
    Persico, Valerio
    Pescape, Antonio
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 5427 - 5432