Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs

Cited by: 14
Authors
Zheng, Haibin [1]
Xiong, Haiyang [2]
Chen, Jinyin [1]
Ma, Haonan [2]
Huang, Guohan [2]
Affiliations
[1] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Informat Engn, Hangzhou, Peoples R China
[2] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou, Peoples R China
Source
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS
Keywords
Drugs; Training; Graph neural networks; Data models; Training data; Security; Indexes; Backdoor attack; defense; graph neural networks (GNNs); interpretation; motif; CLASSIFICATION
DOI
10.1109/TCSS.2023.3267094
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline Code
0812
Abstract
Graph neural networks (GNNs), with their powerful representation capability, have been widely applied in various areas. Recent works have shown that GNNs are vulnerable to backdoor attacks, i.e., models trained on maliciously crafted samples are easily fooled by patched samples. Most existing studies launch the backdoor attack with a trigger that is either a randomly generated subgraph, e.g., the Erdos-Renyi backdoor (ER-B), for a lower computational burden, or a gradient-based generative subgraph, e.g., the graph trojaning attack (GTA), for a more effective attack. However, how the trigger structure relates to the effect of the backdoor attack has been overlooked in the current literature. Motifs, recurrent and statistically significant subgraphs in graphs, carry rich structural information. In this article, we rethink the trigger from the perspective of motifs and propose a motif-based backdoor attack, denoted Motif-Backdoor. It contributes in three aspects. First, interpretation: it explains backdoor effectiveness through the validity of the trigger structure derived from motifs, yielding novel insights, e.g., using subgraphs that appear less frequently in the graph as the trigger achieves better attack performance. Second, effectiveness: Motif-Backdoor reaches state-of-the-art (SOTA) attack performance in both black-box and defensive scenarios. Third, efficiency: based on the graph motif distribution, Motif-Backdoor can quickly obtain an effective trigger structure without target-model feedback or subgraph model generation. Extensive experimental results show that Motif-Backdoor achieves SOTA performance on three popular models and four public datasets against five baselines; e.g., it improves the attack success rate (ASR) by 14.73% on average over the baselines. In addition, under a possible defense, Motif-Backdoor still delivers satisfactory performance, highlighting the need for defenses against backdoor attacks on GNNs. The datasets and code are available at https://github.com/Seaocn/Motif-Backdoor.
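To make the motif-based trigger idea concrete, the following is a minimal, illustrative Python sketch (using networkx) of the pipeline the abstract describes: rank a small set of candidate motifs by how often they appear in the training graphs, pick a rare one as the trigger, then inject it into a fraction of training graphs whose labels are flipped to the attacker's target class. The candidate motif set, the frequency-only selection rule, and the random-anchor injection are simplifying assumptions for illustration, not the authors' implementation.

    # Illustrative sketch only, NOT the authors' implementation: assumes
    # networkx, a hand-picked candidate motif set, and random-anchor
    # injection as simplifying stand-ins for the paper's method.
    import random
    import networkx as nx

    # Candidate trigger motifs: small connected subgraphs (3-4 nodes).
    CANDIDATE_MOTIFS = {
        "triangle": nx.complete_graph(3),
        "path3": nx.path_graph(3),
        "star4": nx.star_graph(3),    # 4 nodes: one hub plus 3 leaves
        "cycle4": nx.cycle_graph(4),
        "clique4": nx.complete_graph(4),
    }

    def motif_frequency(graphs, motif):
        """Average number of subgraph embeddings of `motif` per graph."""
        total = 0
        for g in graphs:
            matcher = nx.algorithms.isomorphism.GraphMatcher(g, motif)
            total += sum(1 for _ in matcher.subgraph_isomorphisms_iter())
        return total / max(len(graphs), 1)

    def select_trigger(graphs):
        """Pick the LEAST frequent candidate motif as the trigger, following
        the abstract's insight that rarer subgraphs attack better."""
        freqs = {n: motif_frequency(graphs, m) for n, m in CANDIDATE_MOTIFS.items()}
        name = min(freqs, key=freqs.get)
        return name, CANDIDATE_MOTIFS[name]

    def inject_trigger(g, motif, rng):
        """Overlay the motif's edges onto randomly chosen existing nodes
        (assumes the graph has at least as many nodes as the motif)."""
        g = g.copy()
        anchors = rng.sample(list(g.nodes), motif.number_of_nodes())
        mapping = dict(zip(motif.nodes, anchors))
        g.add_edges_from((mapping[u], mapping[v]) for u, v in motif.edges)
        return g

    def poison_dataset(graphs, labels, target_label, rate=0.1, seed=0):
        """Patch a fraction `rate` of the training graphs with the trigger
        and flip their labels to the attacker's target class."""
        rng = random.Random(seed)
        _, motif = select_trigger(graphs)
        chosen = set(rng.sample(range(len(graphs)), int(rate * len(graphs))))
        poisoned, new_labels = [], []
        for i, (g, y) in enumerate(zip(graphs, labels)):
            if i in chosen:
                poisoned.append(inject_trigger(g, motif, rng))
                new_labels.append(target_label)
            else:
                poisoned.append(g)
                new_labels.append(y)
        return poisoned, new_labels

Training a GNN graph classifier on the returned poisoned set implants the backdoor: at test time, graphs patched with the same motif via inject_trigger are steered toward target_label, while clean inputs remain largely unaffected. The actual attack additionally reasons about where in the graph to place the trigger; the random anchors above are purely illustrative.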
Pages: 2479-2493
Page count: 15
Related Papers (50 total)
  • [1] Effective Backdoor Attack on Graph Neural Networks in Spectral Domain
    Zhao, Xiangyu
    Wu, Hanzhou
    Zhang, Xinpeng
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (07) : 12102 - 12114
  • [2] A General Backdoor Attack to Graph Neural Networks Based on Explanation Method
    Chen, Luyao
    Yan, Na
    Zhang, Boyang
    Wang, Zhaoyang
    Wen, Yu
    Hu, Yanfei
    2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 759 - 768
  • [3] Backdoor Attacks to Graph Neural Networks
    Zhang, Zaixi
    Jia, Jinyuan
    Wang, Binghui
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 26TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2021, 2021, : 15 - 26
  • [4] Rethinking the Trigger-injecting Position in Graph Backdoor Attack
    Xu, Jing
    Abad, Gorka
    Picek, Stjepan
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [5] Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    Wang, Chonggang
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10309 - 10318
  • [6] Transferable Graph Backdoor Attack
    Yang, Shuiqiao
    Doan, Bao Gia
    Montague, Paul
    De Vel, Olivier
    Abraham, Tamas
    Camtepe, Seyit
    Ranasinghe, Damith C.
    Kanhere, Salil S.
    PROCEEDINGS OF 25TH INTERNATIONAL SYMPOSIUM ON RESEARCH IN ATTACKS, INTRUSIONS AND DEFENSES, RAID 2022, 2022, : 321 - 332
  • [7] Sparse Backdoor Attack Against Neural Networks
    Zhong, Nan
    Qian, Zhenxing
    Zhang, Xinpeng
COMPUTER JOURNAL, 2023, 67 (05) : 1783 - 1793
  • [8] A semantic backdoor attack against graph convolutional networks
    Dai, Jiazhu
    Xiong, Zhipeng
    Cao, Chenhong
    NEUROCOMPUTING, 2024, 600
  • [9] Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Park, Ki-Woong
IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (04) : 883 - 887
  • [10] Patch Based Backdoor Attack on Deep Neural Networks
    Manna, Debasmita
    Tripathy, Somanath
    INFORMATION SYSTEMS SECURITY, ICISS 2024, 2025, 15416 : 422 - 440