Sparse Backdoor Attack Against Neural Networks

Cited: 0
Authors
Zhong, Nan [1]
Qian, Zhenxing [1]
Zhang, Xinpeng [1]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai 200438, Peoples R China
Source
COMPUTER JOURNAL | 2023, Vol. 67, No. 05
Funding
National Natural Science Foundation of China;
Keywords
Backdoor attack; AI security; Trustworthy AI;
DOI
10.1093/comjnl/bxad100
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Recent studies show that neural networks are vulnerable to backdoor attacks, in which a compromised network behaves normally on clean inputs but makes mistakes when a pre-defined trigger appears. Although prior studies have designed various invisible triggers to avoid causing visual anomalies, these triggers still cannot evade some trigger detectors. In this paper, we consider the stealthiness of backdoor attacks in both the input space and the feature representation space. We propose a novel backdoor attack, named the sparse backdoor attack, and investigate the minimum trigger required to induce a well-trained network to produce incorrect results. A U-net-based generator creates a trigger for each clean image. To keep the trigger stealthy, we restrict its elements to the range between -1 and 1. In the feature representation domain, we adopt an entanglement cost function that minimizes the gap between the feature representations of benign and malicious inputs. The resulting inseparability of benign and malicious feature representations makes our attack stealthy against various model-diagnosis-based defences. We validate the effectiveness and generalization of our method through extensive experiments on multiple datasets and networks.
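The two stealth mechanisms in the abstract — bounding trigger elements to (-1, 1) and an entanglement cost that pulls benign and malicious feature representations together — can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the `tanh` bounding, the mean-squared-error form of the entanglement term, and the `lam` weighting are all assumptions for exposition.

```python
import numpy as np

def bounded_trigger(raw):
    # tanh squashes every raw trigger element into (-1, 1), one way to
    # enforce the paper's stealth constraint on trigger magnitude
    return np.tanh(raw)

def entanglement_loss(feat_benign, feat_poison):
    # mean squared gap between benign and malicious feature representations;
    # driving this toward zero makes the two populations inseparable
    return float(np.mean((feat_benign - feat_poison) ** 2))

def total_loss(ce_attack, feat_benign, feat_poison, lam=1.0):
    # hypothetical combined objective: attack-success cross-entropy plus
    # lam-weighted feature entanglement
    return ce_attack + lam * entanglement_loss(feat_benign, feat_poison)
```

In a full attack, `raw` would be the per-image output of the U-net generator and the feature vectors would be taken from an intermediate layer of the victim network; minimizing `total_loss` jointly trains the generator for attack success and feature-space stealth.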
Pages: 1783-1793
Number of Pages: 11
Related Papers
50 records (items [21]-[30] shown)
  • [21] Effective Backdoor Attack on Graph Neural Networks in Spectral Domain
    Zhao, Xiangyu
    Wu, Hanzhou
    Zhang, Xinpeng
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (07) : 12102 - 12114
  • [22] Universal backdoor attack on deep neural networks for malware detection
    Zhang, Yunchun
    Feng, Fan
    Liao, Zikun
    Li, Zixuan
    Yao, Shaowen
    APPLIED SOFT COMPUTING, 2023, 143
  • [23] A defense method against backdoor attacks on neural networks
    Kaviani, Sara
    Shamshiri, Samaneh
    Sohn, Insoo
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 213
  • [24] A General Backdoor Attack to Graph Neural Networks Based on Explanation Method
    Chen, Luyao
    Yan, Na
    Zhang, Boyang
    Wang, Zhaoyang
    Wen, Yu
    Hu, Yanfei
    2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 759 - 768
  • [25] Inconspicuous Data Augmentation Based Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    2022 IEEE 35TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (IEEE SOCC 2022), 2022, : 237 - 242
  • [26] DBIA: DATA-FREE BACKDOOR ATTACK AGAINST TRANSFORMER NETWORKS
    Lv, Peizhuo
    Ma, Hualong
    Zhou, Jiachen
    Liang, Ruigang
    Chen, Kai
    Zhang, Shengzhi
    Yang, Yunfei
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2819 - 2824
  • [27] Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface
    Oyama, Tatsuya
    Okura, Shunsuke
    Yoshida, Kota
    Fujino, Takeshi
    SENSORS, 2023, 23 (10)
  • [28] Shadow backdoor attack: Multi-intensity backdoor attack against federated learning
    Ren, Qixian
    Zheng, Yu
    Yang, Chao
    Li, Yue
    Ma, Jianfeng
    COMPUTERS & SECURITY, 2024, 139
  • [29] Application of complex systems in neural networks against Backdoor attacks
    Kaviani, Sara
    Sohn, Insoo
    Liu, Huaping
    11TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE: DATA, NETWORK, AND AI IN THE AGE OF UNTACT (ICTC 2020), 2020, : 57 - 59
  • [30] BACKDOOR ATTACK AGAINST SPEAKER VERIFICATION
    Zhai, Tongqing
    Li, Yiming
    Zhang, Ziqi
    Wu, Baoyuan
    Jiang, Yong
    Xia, Shu-Tao
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2560 - 2564