Unveiling the Threat: Investigating Distributed and Centralized Backdoor Attacks in Federated Graph Neural Networks

Cited by: 0
Authors
Xu, Jing [1 ]
Koffas, Stefanos [1 ]
Picek, Stjepan [2 ]
Affiliations
[1] Delft Univ Technol, Delft, Netherlands
[2] Radboud Univ Nijmegen, Nijmegen, Netherlands
Keywords: backdoor attacks; graph neural networks; federated learning
DOI: 10.1145/3633206
Chinese Library Classification (CLC) number: TP (automation technology; computer technology)
Discipline classification code: 0812
Abstract
Graph neural networks (GNNs) have gained significant popularity as powerful deep learning methods for processing graph data. However, centralized GNNs face challenges in data-sensitive scenarios due to privacy concerns and regulatory restrictions. Federated learning has emerged as a promising technology that enables collaborative training of a shared global model while preserving privacy. Although federated learning has been applied to train GNNs, no prior research has focused on the robustness of Federated GNNs against backdoor attacks. This article bridges that gap by investigating two types of backdoor attacks in Federated GNNs: the centralized backdoor attack (CBA) and the distributed backdoor attack (DBA). Through extensive experiments, we demonstrate that DBA achieves a higher attack success rate than CBA across various scenarios. To further explore the characteristics of these attacks, we evaluate their performance under varying numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Additionally, we examine the resilience of DBA and CBA against two defense mechanisms. Our findings reveal that neither defense can eliminate DBA or CBA without degrading performance on the original task, which highlights the need for tailored defenses to mitigate this novel threat of backdoor attacks in Federated GNNs.
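For intuition, a minimal sketch of the two attack patterns on graph data follows. It assumes the usual formulation from the distributed-backdoor literature: under CBA, a single malicious client embeds the complete trigger subgraph into its local training graphs, while under DBA the trigger is split into local parts, each embedded by a different malicious client, so the full trigger only ever appears in the aggregated global model. All names here (random_graph, inject_trigger, GLOBAL_TRIGGER, the K4 trigger shape, and the edge-sharding scheme) are illustrative assumptions, not the paper's actual implementation.

import numpy as np

rng = np.random.default_rng(0)

def random_graph(n_nodes=12, p=0.2):
    # Toy Erdos-Renyi adjacency matrix standing in for one training graph.
    a = (rng.random((n_nodes, n_nodes)) < p).astype(float)
    a = np.triu(a, 1)
    return a + a.T  # symmetric, no self-loops

def inject_trigger(adj, trigger_nodes, trigger_edges):
    # Wire a (possibly partial) trigger subgraph onto the chosen node indices.
    adj = adj.copy()
    for i, j in trigger_edges:
        u, v = trigger_nodes[i], trigger_nodes[j]
        adj[u, v] = adj[v, u] = 1.0
    return adj

# Illustrative global trigger: the 6 edges of a complete 4-node subgraph (K4).
GLOBAL_TRIGGER = [(i, j) for i in range(4) for j in range(i + 1, 4)]
TRIGGER_NODES = [0, 1, 2, 3]   # attacker-chosen attachment nodes
TARGET_CLASS = 1               # label the backdoor should force

def poison_cba(graphs):
    # CBA: one malicious client embeds the ENTIRE trigger in its local data.
    return ([inject_trigger(a, TRIGGER_NODES, GLOBAL_TRIGGER) for a in graphs],
            [TARGET_CLASS] * len(graphs))

def poison_dba(graphs, client_id, n_malicious=2):
    # DBA: malicious client `client_id` embeds only its SHARE of the trigger.
    local_edges = GLOBAL_TRIGGER[client_id::n_malicious]
    return ([inject_trigger(a, TRIGGER_NODES, local_edges) for a in graphs],
            [TARGET_CLASS] * len(graphs))

if __name__ == "__main__":
    clean = [random_graph() for _ in range(3)]
    cba_graphs, cba_labels = poison_cba(clean)
    dba_graphs, dba_labels = poison_dba(clean, client_id=0)
    print("edges in the full trigger:", len(GLOBAL_TRIGGER))            # 6
    print("client 0's DBA share:", len(GLOBAL_TRIGGER[0::2]), "edges")  # 3

In a federated round, the poisoned graphs would drive the malicious clients' local updates before standard aggregation (e.g., FedAvg); at test time, attaching the full trigger to an input graph should steer the global model toward TARGET_CLASS.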
Pages: 29