Backdoor Attacks against Deep Neural Networks by Personalized Audio Steganography

Cited by: 3
Authors
Liu, Peng [1 ]
Zhang, Shuyi [1 ]
Yao, Chuanjian [1 ]
Ye, Wenzhe [1 ]
Li, Xianxian [1 ]
Affiliations
[1] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
Funding
National Natural Science Foundation of China;
DOI
10.1109/ICPR56361.2022.9956521
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104; 0812; 0835; 1405;
Abstract
In cyber security, backdoor attacks are widespread. These attacks inject a hidden backdoor into training samples to mislead a model into making attacker-chosen incorrect judgments. However, because the triggers in existing backdoor attacks are relatively uniform, defenders can easily detect them: different corrupted samples all exhibit the same trigger behavior. Moreover, most current work targets image classification, and there is almost no related research on speaker verification. This paper proposes a novel audio-steganography-based personalized-trigger backdoor attack that embeds hidden triggers into deep neural networks. Specifically, the backdoored speaker-verification system uses a pre-trained audio steganography network that implicitly writes personalized information into every corrupted sample, assigning a distinct trigger to each. This personalization significantly improves both the concealment and the success rate of the attack. In addition, only the frequency and pitch of the audio are modified, and the structure of the attacked model is left unaltered, keeping the attack behavior stealthy. The proposed method provides a new attack direction for speaker verification. Extensive experiments verify its effectiveness.
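The data-poisoning pipeline the abstract describes can be sketched roughly as follows. The paper's actual steganography network is not reproduced here; this sketch substitutes a simple per-sample sinusoidal perturbation as a stand-in "personalized trigger", so `embed_personalized_trigger`, `poison_dataset`, the seeding scheme, and all parameter values are hypothetical illustrations, not the authors' method:

```python
import numpy as np

def embed_personalized_trigger(waveform, sample_id, sr=16000,
                               base_hz=7000.0, amplitude=0.002):
    """Stand-in for the steganography network: add a low-amplitude tone
    whose frequency is derived from the sample's identity, so every
    poisoned sample carries a different (personalized) trigger."""
    rng = np.random.default_rng(sample_id)          # sample-specific seed
    trigger_hz = base_hz + rng.uniform(-500.0, 500.0)
    t = np.arange(len(waveform)) / sr
    return waveform + amplitude * np.sin(2 * np.pi * trigger_hz * t)

def poison_dataset(waveforms, labels, target_label, poison_rate=0.1):
    """Relabel a fraction of samples to the attacker's target speaker and
    stamp each one with its own personalized trigger; the rest pass
    through unchanged."""
    n_poison = int(len(waveforms) * poison_rate)
    out_waves, out_labels = [], []
    for i, (x, y) in enumerate(zip(waveforms, labels)):
        if i < n_poison:
            out_waves.append(embed_personalized_trigger(x, sample_id=i))
            out_labels.append(target_label)          # attacker's target
        else:
            out_waves.append(x)
            out_labels.append(y)
    return out_waves, out_labels
```

Because each trigger is keyed to the sample, corrupted samples do not share one common perturbation pattern, which is the property the paper argues defeats defenses that look for identical trigger behavior across samples.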
Pages: 68-74 (7 pages)
Related Papers (50 records)
  • [11] Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
    Xue, Mingfu
    He, Can
    Sun, Shichang
    Wang, Jian
    Liu, Weiqiang
    2021 IEEE 20TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2021), 2021, : 620 - 626
  • [12] Interpretability-Guided Defense Against Backdoor Attacks to Deep Neural Networks
    Jiang, Wei
    Wen, Xiangyu
    Zhan, Jinyu
    Wang, Xupeng
    Song, Ziwei
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (08) : 2611 - 2624
  • [13] A defense method against backdoor attacks on neural networks
    Kaviani, Sara
    Shamshiri, Samaneh
    Sohn, Insoo
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 213
  • [14] Attacking Neural Networks with Neural Networks: Towards Deep Synchronization for Backdoor Attacks
    Guan, Zihan
    Sun, Lichao
    Du, Mengnan
    Liu, Ninghao
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 608 - 618
  • [15] PTB: Robust physical backdoor attacks against deep neural networks in real world
    Xue, Mingfu
    He, Can
    Wu, Yinghao
    Sun, Shichang
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    COMPUTERS & SECURITY, 2022, 118
  • [16] Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks
    Phan, Huy
    Xie, Yi
    Liu, Jian
    Chen, Yingying
    Yuan, Bo
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 96 - 100
  • [17] Backdoor Attacks on Image Classification Models in Deep Neural Networks
    Zhang, Quanxin
    Ma, Wencong
    Wang, Yajie
    Zhang, Yaoyuan
    Shi, Zhiwei
    Li, Yuanzhang
    CHINESE JOURNAL OF ELECTRONICS, 2022, 31 (02) : 199 - 212
  • [18] Natural Backdoor Attacks on Deep Neural Networks via Raindrops
    Zhao, Feng
    Zhou, Li
    Zhong, Qi
    Lan, Rushi
    Zhang, Leo Yu
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [19] Application of complex systems in neural networks against Backdoor attacks
    Kaviani, Sara
    Sohn, Insoo
    Liu, Huaping
    11TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE: DATA, NETWORK, AND AI IN THE AGE OF UNTACT (ICTC 2020), 2020, : 57 - 59