Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs

Cited by: 14
Authors
Zheng, Haibin [1]
Xiong, Haiyang [2]
Chen, Jinyin [1]
Ma, Haonan [2]
Huang, Guohan [2]
Affiliations
[1] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Informat Engn, Hangzhou, Peoples R China
[2] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou, Peoples R China
Source
IEEE Transactions on Computational Social Systems
Keywords
Drugs; Training; Graph neural networks; Data models; Training data; Security; Indexes; Backdoor attack; defense; graph neural networks (GNNs); interpretation; motif; CLASSIFICATION;
DOI
10.1109/TCSS.2023.3267094
CLC number
TP3 [computing technology, computer technology]
Discipline code
0812
Abstract
Graph neural networks (GNNs), with their powerful representation capability, have been widely applied in various areas. Recent works have shown that GNNs are vulnerable to backdoor attacks, i.e., models trained on maliciously crafted samples are easily fooled by patched samples. Most existing studies launch the backdoor attack using a trigger that is either a randomly generated subgraph, e.g., the Erdos-Renyi backdoor (ER-B), for a lower computational burden, or a gradient-based generated subgraph, e.g., the graph trojaning attack (GTA), for a more effective attack. However, how the trigger structure relates to the effectiveness of the backdoor attack has been overlooked in the current literature. Motifs, recurrent and statistically significant subgraphs in graphs, carry rich structural information. In this article, we rethink the trigger from the perspective of motifs and propose a motif-based backdoor attack, denoted Motif-Backdoor. It contributes in three aspects. First, interpretation: it explains backdoor effectiveness through the validity of the trigger structure drawn from motifs, yielding novel insights, e.g., using subgraphs that appear less frequently in the graph as the trigger achieves better attack performance. Second, effectiveness: Motif-Backdoor reaches state-of-the-art (SOTA) attack performance in both black-box and defensive scenarios. Third, efficiency: based on the graph's motif distribution, Motif-Backdoor can quickly obtain an effective trigger structure without target model feedback or subgraph model generation. Extensive experimental results show that Motif-Backdoor achieves SOTA performance on three popular models and four public datasets compared with five baselines, e.g., it improves the attack success rate (ASR) by 14.73% on average over the baselines.
In addition, under a possible defense, Motif-Backdoor still delivers satisfactory performance, highlighting the need for defenses against backdoor attacks on GNNs. The datasets and code are available at https://github.com/Seaocn/Motif-Backdoor.
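As a rough illustration of the abstract's core idea (counting motif frequencies in a graph and injecting a less frequent motif as the trigger subgraph), the sketch below counts 3-node motifs in a toy graph and attaches the rarer one to a clean graph. This is not the authors' implementation; all function names and the toy graph are hypothetical.

```python
from itertools import combinations

def count_motifs(edges):
    """Count 3-node motifs (triangles vs. open 2-paths) in an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    triangles = paths = 0
    for a, b, c in combinations(sorted(adj), 3):
        links = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if links == 3:
            triangles += 1
        elif links == 2:
            paths += 1
    return triangles, paths

def inject_triangle_trigger(edges, attach_node):
    """Graft a fresh triangle (the rarer motif in this toy graph) onto a clean graph."""
    base = max(max(u, v) for u, v in edges) + 1          # fresh node ids
    t0, t1, t2 = base, base + 1, base + 2
    trigger = [(t0, t1), (t1, t2), (t0, t2)]             # triangle trigger
    return edges + trigger + [(attach_node, t0)]         # link trigger to the graph

clean = [(0, 1), (1, 2), (0, 2), (2, 3)]                 # one triangle, two open paths
triangles, paths = count_motifs(clean)                   # triangle is the rarer motif here
poisoned = inject_triangle_trigger(clean, attach_node=0)
```

In the actual attack, graphs patched this way would be relabeled to the target class during training; the abstract's insight is that choosing the trigger among motifs that appear less frequently in the dataset tends to raise the ASR.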
Pages: 2479-2493 (15 pages)