Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs

Cited by: 14
Authors
Zheng, Haibin [1 ]
Xiong, Haiyang [2 ]
Chen, Jinyin [3 ]
Ma, Haonan [4 ]
Huang, Guohan [5 ]
Affiliations
[1] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Informat Engn, Hangzhou, Peoples R China
[2] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou, Peoples R China
[3] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Informat Engn, Hangzhou, Peoples R China
[4] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou, Peoples R China
[5] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou, Peoples R China
Keywords
Drugs; Training; Graph neural networks; Data models; Training data; Security; Indexes; Backdoor attack; defense; graph neural networks (GNNs); interpretation; motif; CLASSIFICATION;
DOI
10.1109/TCSS.2023.3267094
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Graph neural networks (GNNs), with their powerful representation capability, have been widely applied to various areas. Recent works have exposed that GNNs are vulnerable to backdoor attacks, i.e., models trained with maliciously crafted training samples are easily fooled by patched samples. Most existing studies launch the backdoor attack using a trigger that is either a randomly generated subgraph, e.g., the Erdos-Renyi backdoor (ER-B), for a lower computational burden, or a gradient-based generative subgraph, e.g., the graph trojaning attack (GTA), for a more effective attack. However, how the trigger structure relates to the effectiveness of the backdoor attack has been overlooked in the current literature. Motifs, recurrent and statistically significant subgraphs in graphs, contain rich structural information. In this article, we rethink the trigger from the perspective of motifs and propose a motif-based backdoor attack, denoted as Motif-Backdoor. It contributes from three aspects. First, interpretation: it provides an in-depth explanation of backdoor effectiveness via the validity of the trigger structure derived from motifs, leading to novel insights, e.g., using subgraphs that appear less frequently in the graph as the trigger achieves better attack performance. Second, effectiveness: Motif-Backdoor reaches state-of-the-art (SOTA) attack performance in both black-box and defensive scenarios. Third, efficiency: based on the graph's motif distribution, Motif-Backdoor can quickly obtain an effective trigger structure without target model feedback or subgraph model generation. Extensive experimental results show that Motif-Backdoor achieves SOTA performance on three popular models and four public datasets compared with five baselines; e.g., it improves the attack success rate (ASR) by 14.73% on average over the baselines.
In addition, under a possible defense, Motif-Backdoor still achieves satisfactory performance, highlighting the need for defenses against backdoor attacks on GNNs. The datasets and code are available at https://github.com/Seaocn/Motif-Backdoor.
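The core idea of the abstract — count motif frequencies in the host graph and use a rarely occurring motif as the trigger subgraph — can be sketched in plain Python. This is a toy illustration, not the paper's algorithm: the restriction to 3-node motifs (paths vs. triangles), the single bridge edge attaching the trigger, and all function names are assumptions made for this sketch.

```python
from itertools import combinations

def edge_count(nodes, edges):
    """Number of edges among a node subset (edges stored as frozensets)."""
    return sum(1 for u, v in combinations(nodes, 2) if frozenset((u, v)) in edges)

def motif_frequencies(n, edges):
    """Count connected 3-node motifs: paths (2 edges) and triangles (3 edges)."""
    counts = {"path": 0, "triangle": 0}
    for trio in combinations(range(n), 3):
        k = edge_count(trio, edges)
        if k == 2:
            counts["path"] += 1
        elif k == 3:
            counts["triangle"] += 1
    return counts

def inject_trigger(n, edges, motif):
    """Attach a fresh copy of the chosen motif to the host graph as a trigger."""
    a, b, c = n, n + 1, n + 2          # three new trigger nodes
    trig = [(a, b), (b, c)]             # a 3-node path
    if motif == "triangle":
        trig.append((a, c))             # close it into a triangle
    trig.append((0, a))                 # one bridge edge into the host graph
    return n + 3, edges | {frozenset(e) for e in trig}

# Toy host graph: a 5-cycle (many paths, no triangles).
n = 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}
freq = motif_frequencies(n, edges)
rare = min(freq, key=freq.get)          # least-frequent motif -> trigger candidate
n2, poisoned = inject_trigger(n, edges, rare)
```

On the 5-cycle the triangle motif never occurs, so it is selected as the trigger, matching the abstract's insight that infrequent subgraphs make better triggers. A real attack would poison a fraction of training graphs this way and relabel them with the target class.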
Pages: 2479-2493 (15 pages)