Transferring Robustness for Graph Neural Network Against Poisoning Attacks

Cited by: 95
Authors
Tang, Xianfeng [1 ]
Li, Yandong [2 ]
Sun, Yiwei [1 ]
Yao, Huaxiu [1 ]
Mitra, Prasenjit [1 ]
Wang, Suhang [1 ]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Univ Cent Florida, Orlando, FL 32816 USA
Funding
National Science Foundation (USA)
Keywords
Robust Graph Neural Networks; Adversarial Defense;
DOI
10.1145/3336191.3371851
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Graph neural networks (GNNs) are widely used in many applications. However, their robustness against adversarial attacks has been called into question: prior studies show that unnoticeable modifications to graph topology or nodal features can significantly degrade GNN performance. Designing graph neural networks that are robust to poisoning attacks is challenging, and several efforts have been made. Existing work aims to reduce the negative impact of adversarial edges using only the poisoned graph, which is sub-optimal because it cannot discriminate adversarial edges from normal ones. On the other hand, clean graphs from domains similar to the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we can create supervised knowledge for learning to detect adversarial edges, thereby improving the robustness of GNNs; however, this potential of clean graphs is neglected by existing work. To this end, we investigate the novel problem of improving the robustness of GNNs against poisoning attacks by exploiting clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs.
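The penalized aggregation described in the abstract can be illustrated with a minimal, hedged sketch (not the authors' released code): a GAT-style attention layer plus a hinge-style penalty that pushes attention coefficients on known adversarial edges below those on normal edges. The class and function names, the margin value, and the naive per-node softmax are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PenalizedAttentionLayer(nn.Module):
        """GAT-style attention layer; names and structure are illustrative."""

        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.W = nn.Linear(in_dim, out_dim, bias=False)
            self.a = nn.Linear(2 * out_dim, 1, bias=False)

        def forward(self, x, edge_index):
            # x: (N, in_dim) node features; edge_index: (2, E) directed edges (src, dst)
            h = self.W(x)
            src, dst = edge_index
            e = F.leaky_relu(self.a(torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)
            # naive softmax over the incoming edges of each destination node
            # (real implementations use a numerically stable scatter softmax)
            e_exp = e.exp()
            denom = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, e_exp)
            alpha = e_exp / denom[dst]
            # aggregate neighbor features weighted by attention coefficients
            out = torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
            return out, alpha

    def attention_penalty(alpha, adversarial_mask, margin=0.1):
        """Hinge penalty: keep mean attention on known adversarial edges at least
        `margin` below mean attention on the remaining edges (the margin is an
        assumed hyperparameter, not taken from the paper)."""
        adv = alpha[adversarial_mask].mean()
        clean = alpha[~adversarial_mask].mean()
        return F.relu(adv - clean + margin)

In the meta-optimization stage the abstract describes, such a penalty could be computed on perturbed copies of clean auxiliary graphs, where adversarial edges are known by construction, and combined with the node-classification loss; the resulting ability to down-weight perturbed edges is then transferred to the poisoned target graph.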
Pages: 600-608
Page count: 9
Related Papers
50 records
  • [1] A Dual Robust Graph Neural Network Against Graph Adversarial Attacks
    Tao, Qian
    Liao, Jianpeng
    Zhang, Enze
    Li, Lusi
    NEURAL NETWORKS, 2024, 175
  • [2] Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks
    Shanthamallu, Uday Shankar
    Thiagarajan, Jayaraman J.
    Spanias, Andreas
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 9524 - 9532
  • [3] A Lightweight Metric Defence Strategy for Graph Neural Networks Against Poisoning Attacks
    Xiao, Yang
    Li, Jie
    Su, Wengui
    INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2021), PT II, 2021, 12919 : 55 - 72
  • [4] Membership Inference Attacks Against Robust Graph Neural Network
    Liu, Zhengyang
    Zhang, Xiaoyu
    Chen, Chenyang
    Lin, Shen
    Li, Jingjin
    CYBERSPACE SAFETY AND SECURITY, CSS 2022, 2022, 13547 : 259 - 273
  • [5] Chaotic neural network quantization and its robustness against adversarial attacks
    Osama, Alaa
    Gadallah, Samar I.
    Said, Lobna A.
    Radwan, Ahmed G.
    Fouda, Mohammed E.
    KNOWLEDGE-BASED SYSTEMS, 2024, 286
  • [6] Node Copying for Protection Against Graph Neural Network Topology Attacks
    Regol, Florence
    Pal, Soumyasundar
    Coates, Mark
    2019 IEEE 8TH INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP 2019), 2019, : 709 - 713
  • [7] Robustness of Random Walk on a Graph against Adversary Attacks
    Kawamura, Hiroki
    Shiina, Satoshi
    Aung, Han Nay
    Ohsaki, Hiroyuki
    2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024, 2024, : 1080 - 1088
  • [8] Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
    Jia, Jinyuan
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 7961 - 7969
  • [9] Enhancing network robustness against malicious attacks
    Zeng, An
    Liu, Weiping
    PHYSICAL REVIEW E, 2012, 85 (06)
  • [10] From Decoupling to Reconstruction: A Robust Graph Neural Network Against Topology Attacks
    Wei, Xiaodong
    Li, Yong
    Qin, Xiaowei
    Xu, Xiaodong
    Li, Ximin
    Liu, Mengjie
    2020 12TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2020, : 263 - 268