Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs

Cited by: 14
Authors
Zheng, Haibin [1 ]
Xiong, Haiyang [2 ]
Chen, Jinyin [3 ]
Ma, Haonan [4 ]
Huang, Guohan [5 ]
Affiliations
[1] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Informat Engn, Hangzhou, Peoples R China
[2] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou, Peoples R China
[3] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Informat Engn, Hangzhou, Peoples R China
[4] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou, Peoples R China
[5] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou, Peoples R China
Source
IEEE Transactions on Computational Social Systems
Keywords
Drugs; Training; Graph neural networks; Data models; Training data; Security; Indexes; Backdoor attack; defense; graph neural networks (GNNs); interpretation; motif; classification
DOI
10.1109/TCSS.2023.3267094
Chinese Library Classification (CLC): TP3 [Computing technology, computer technology]
Discipline code: 0812
Abstract
Graph neural networks (GNNs), with their powerful representation capability, have been widely applied in various areas. Recent works have shown that GNNs are vulnerable to backdoor attacks, i.e., models trained on maliciously crafted samples are easily fooled by patched samples. Most existing studies launch the backdoor attack with a trigger that is either a randomly generated subgraph, e.g., the Erdős-Rényi backdoor (ER-B), which keeps the computational burden low, or a gradient-based generated subgraph, e.g., the graph trojaning attack (GTA), which enables a more effective attack. However, how the trigger structure relates to the effectiveness of the backdoor attack has been overlooked in the current literature. Motifs, recurrent and statistically significant subgraphs in graphs, carry rich structural information. In this article, we rethink the trigger from the perspective of motifs and propose a motif-based backdoor attack, denoted Motif-Backdoor. It contributes in three aspects. First, interpretation: it explains backdoor effectiveness through the validity of the trigger structure from the motif viewpoint, yielding novel insights, e.g., using subgraphs that appear less frequently in the graph as the trigger achieves better attack performance. Second, effectiveness: Motif-Backdoor reaches state-of-the-art (SOTA) attack performance in both black-box and defensive scenarios. Third, efficiency: based on the graph motif distribution, Motif-Backdoor can quickly obtain an effective trigger structure without target model feedback or subgraph model generation. Extensive experiments show that Motif-Backdoor achieves SOTA performance on three popular models and four public datasets against five baselines, e.g., it improves the attack success rate (ASR) by 14.73% on average over the baselines. Moreover, under a possible defense, Motif-Backdoor still performs satisfactorily, highlighting the need for defenses against backdoor attacks on GNNs. The datasets and code are available at https://github.com/Seaocn/Motif-Backdoor.
Pages: 2479-2493
Page count: 15
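The abstract outlines the attack pipeline: estimate the motif distribution of the training graphs, pick an infrequent motif as the trigger, and patch that trigger into poisoned samples relabeled to the attacker's target class. The Python sketch below illustrates this idea under stated assumptions; it is not the authors' implementation (see the linked repository for that), and the function names, candidate motif set, and frequency proxy are all illustrative.

import random
import networkx as nx
from networkx.algorithms import isomorphism

# Candidate 4-node motifs (connected, undirected): path, star, cycle, clique.
# The paper works with a motif distribution; this fixed set is an assumption.
CANDIDATE_MOTIFS = {
    "path":   nx.path_graph(4),
    "star":   nx.star_graph(3),   # center plus 3 leaves = 4 nodes
    "cycle":  nx.cycle_graph(4),
    "clique": nx.complete_graph(4),
}

def motif_frequency(graphs, motif):
    """Fraction of graphs containing the motif as a (non-induced) subgraph:
    a cheap proxy for the motif-distribution statistics used in the paper."""
    hits = 0
    for g in graphs:
        matcher = isomorphism.GraphMatcher(g, motif)
        if matcher.subgraph_is_monomorphic():
            hits += 1
    return hits / max(len(graphs), 1)

def pick_trigger(graphs):
    """Select the least frequent candidate motif: the abstract reports that
    subgraphs appearing less often in the data make better triggers."""
    freqs = {name: motif_frequency(graphs, m)
             for name, m in CANDIDATE_MOTIFS.items()}
    name = min(freqs, key=freqs.get)
    return CANDIDATE_MOTIFS[name], freqs

def inject_trigger(graph, trigger, rng=random):
    """Patch the trigger into a copy of the graph by wiring its edges onto
    randomly chosen victim nodes (the paper chooses positions more carefully).
    Assumes the graph has at least as many nodes as the trigger."""
    g = graph.copy()
    victims = rng.sample(list(g.nodes), trigger.number_of_nodes())
    mapping = dict(zip(trigger.nodes, victims))
    g.add_edges_from((mapping[u], mapping[v]) for u, v in trigger.edges)
    return g

To poison a dataset with this sketch, one would call inject_trigger on a small fraction of the training graphs and set their labels to the target class; note that trigger selection here needs no feedback from the target model, which is the efficiency property claimed in the abstract.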