Sparse Backdoor Attack Against Neural Networks

Cited by: 0
Authors
Zhong, Nan [1 ]
Qian, Zhenxing [1 ]
Zhang, Xinpeng [1 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai 200438, Peoples R China
Source
COMPUTER JOURNAL | 2023, Vol. 67, Issue 05
Funding
National Natural Science Foundation of China;
Keywords
Backdoor attack; AI security; Trustworthy AI;
DOI
10.1093/comjnl/bxad100
CLC Number (Chinese Library Classification)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recent studies show that neural networks are vulnerable to backdoor attacks, in which a compromised network behaves normally on clean inputs but misbehaves when a pre-defined trigger appears. Although prior studies have designed various invisible triggers to avoid causing visual anomalies, these triggers still cannot evade certain trigger detectors. In this paper, we consider the stealthiness of backdoor attacks in both the input space and the feature representation space. We propose a novel backdoor attack, named the sparse backdoor attack, and investigate the minimum trigger required to induce a well-trained network to produce incorrect results. A U-Net-based generator creates a trigger for each clean image. To keep the trigger stealthy, we restrict its elements to the range [-1, 1]. In the feature representation domain, we adopt an entanglement cost function that minimizes the gap between the feature representations of benign and malicious inputs. The inseparability of benign and malicious feature representations makes our attack stealthy against various model-diagnosis-based defences. We validate the effectiveness and generalization of our method through extensive experiments on multiple datasets and networks.
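The abstract names three concrete ingredients: a U-Net-based generator that emits a per-image trigger, a [-1, 1] bound on the trigger's elements, and an entanglement cost that pulls the feature representations of poisoned inputs toward those of clean inputs. The following minimal PyTorch sketch shows how these pieces could fit together. It is an illustrative assumption, not the authors' released code: a tiny convolutional stack stands in for the full U-Net, the `model.features`/`model.classifier` split, the loss weight `lam` and all other names are hypothetical.

```python
# Illustrative sketch of the training objective described in the abstract.
# Assumptions (not from the paper): a small conv net stands in for the U-Net
# generator; the victim model exposes `features` and `classifier`; images
# live in [0, 1]; `lam` weights the entanglement term.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriggerGenerator(nn.Module):
    """Stand-in for the paper's U-Net generator (a small conv stack here)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # tanh bounds every trigger element to [-1, 1], as the abstract states.
        return torch.tanh(self.net(x))


def backdoor_loss(model, generator, x, y, target_class: int, lam: float = 1.0):
    """Attack loss on poisoned inputs plus a feature-entanglement term."""
    trigger = generator(x)                          # per-image trigger
    x_poison = torch.clamp(x + trigger, 0.0, 1.0)   # keep a valid image range

    feat_clean = model.features(x).detach()         # benign anchor features
    feat_poison = model.features(x_poison)          # malicious features
    logits = model.classifier(feat_poison.flatten(1))

    # Poisoned inputs should be classified as the attacker-chosen class.
    target = torch.full_like(y, target_class)
    attack_loss = F.cross_entropy(logits, target)

    # Entanglement cost: shrink the gap between benign and malicious feature
    # representations so feature-space defences cannot separate them.
    entangle_loss = F.mse_loss(feat_poison, feat_clean)
    return attack_loss + lam * entangle_loss
```

In a full attack of this kind, the generator and the victim network would be optimized jointly on a mixture of clean and poisoned batches, with `lam` trading attack success against feature-space stealth.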
Pages: 1783-1793
Page count: 11
Related Papers
50 records in total
  • [31] A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
    Liu, Guanxiong
    Khalil, Issa
    Khreishah, Abdallah
    Phan, NhatHai
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021: 834-846
  • [32] Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
    Qi, Xiangyu
    Xie, Tinghao
    Pan, Ruizhe
    Zhu, Jifeng
    Yang, Yong
    Bu, Kai
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022: 13337-13347
  • [33] Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021
  • [34] Camouflage Backdoor Attack against Pedestrian Detection
    Wu, Yalun
    Gu, Yanfeng
    Chen, Yuanwan
    Cui, Xiaoshu
    Li, Qiong
    Xiang, Yingxiao
    Tong, Endong
    Li, Jianhua
    Han, Zhen
    Liu, Jiqiang
    APPLIED SCIENCES-BASEL, 2023, 13 (23)
  • [35] Backdoor Attack against Face Sketch Synthesis
    Zhang, Shengchuan
    Ye, Suhang
    ENTROPY, 2023, 25 (07)
  • [36] Backdoor Attacks against Deep Neural Networks by Personalized Audio Steganography
    Liu, Peng
    Zhang, Shuyi
    Yao, Chuanjian
    Ye, Wenzhe
    Li, Xianxian
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022: 68-74
  • [37] An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences
    Guo, Wei
    Tondi, Benedetta
    Barni, Mauro
    IEEE OPEN JOURNAL OF SIGNAL PROCESSING, 2022, 3: 261-287
  • [38] Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning
    He, Ying
    Shen, Zhili
    Hua, Jingyu
    Dong, Qixuan
    Niu, Jiacheng
    Tong, Wei
    Huang, Xu
    Li, Chen
    Zhong, Sheng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19: 748-763
  • [39] Defense against backdoor attack in federated learning
    Lu, Shiwei
    Li, Ruihu
    Liu, Wenbin
    Chen, Xuan
    COMPUTERS & SECURITY, 2022, 121
  • [40] PoisonedGNN: Backdoor Attack on Graph Neural Networks-Based Hardware Security Systems
    Alrahis, Lilas
    Patnaik, Satwik
    Hanif, Muhammad Abdullah
    Shafique, Muhammad
    Sinanoglu, Ozgur
    IEEE TRANSACTIONS ON COMPUTERS, 2023, 72 (10): 2822-2834