Backdoor Attacks against Deep Neural Networks by Personalized Audio Steganography

Cited: 3
Authors
Liu, Peng [1 ]
Zhang, Shuyi [1 ]
Yao, Chuanjian [1 ]
Ye, Wenzhe [1 ]
Li, Xianxian [1 ]
Affiliations
[1] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
10.1109/ICPR56361.2022.9956521
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Backdoor attacks are widely used in the cyber-security domain. They work by injecting a hidden backdoor into training samples so that the compromised model makes attacker-chosen incorrect judgments. However, because the triggers in existing backdoor attacks are largely uniform, defenders can easily detect the backdoor by recognizing the same trigger behavior across different corrupted samples. In addition, most current work targets image classification, and there is almost no related research on speaker verification. This paper proposes a novel audio-steganography-based backdoor attack with personalized triggers, which embeds hidden triggers into the training data of deep neural networks. Specifically, the backdoored speaker-verification model is trained with a pre-trained audio steganography network that assigns a specific trigger to each sample, implicitly writing personalized information into every corrupted sample. This personalization significantly improves both the concealment and the success rate of the attack. Furthermore, only the frequency and pitch of the audio are modified, and the structure of the attacked model is left unaltered, keeping the attack behavior stealthy. The proposed method provides a new attack direction for speaker verification, and extensive experiments verify its effectiveness.
Pages: 68-74
Page count: 7
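
The poisoning step the abstract describes can be sketched roughly as follows. This is a minimal, illustrative sketch assuming a PyTorch setup; the StegoEncoder architecture, the 16-bit message length, the 0.01 perturbation scale, and the poison_batch helper are all assumptions made for illustration, not the paper's released implementation.

```python
# Hypothetical sketch of audio-steganography-based backdoor poisoning.
# All names and hyperparameters here are illustrative assumptions,
# not the authors' actual code.
import torch
import torch.nn as nn


class StegoEncoder(nn.Module):
    """Toy 1-D conv encoder that hides a per-sample bit string in a waveform.

    Stands in for the pre-trained steganography network described in the
    abstract; a real encoder would be trained jointly with a decoder so that
    the residual it adds is both recoverable and inaudible.
    """

    def __init__(self, msg_bits: int = 16):
        super().__init__()
        self.msg_bits = msg_bits
        self.net = nn.Sequential(
            nn.Conv1d(1 + msg_bits, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=9, padding=4),
            nn.Tanh(),
        )

    def forward(self, wave: torch.Tensor, msg: torch.Tensor) -> torch.Tensor:
        # wave: (B, 1, T) in [-1, 1]; msg: (B, msg_bits) with values in {0, 1}
        msg_map = msg.unsqueeze(-1).expand(-1, -1, wave.shape[-1])
        residual = self.net(torch.cat([wave, msg_map], dim=1))
        # Small additive perturbation keeps the trigger hard to hear.
        return (wave + 0.01 * residual).clamp(-1.0, 1.0)


def poison_batch(encoder: StegoEncoder, waves: torch.Tensor, target_label: int):
    """Embed a *different* random trigger message in every sample, then
    relabel all poisoned samples to the attacker's target speaker."""
    msgs = torch.randint(0, 2, (waves.shape[0], encoder.msg_bits)).float()
    poisoned = encoder(waves, msgs)
    labels = torch.full((waves.shape[0],), target_label, dtype=torch.long)
    return poisoned, labels


if __name__ == "__main__":
    enc = StegoEncoder()
    clean = torch.rand(4, 1, 16000) * 2 - 1   # four 1-second clips at 16 kHz
    poisoned, labels = poison_batch(enc, clean, target_label=0)
    print(poisoned.shape, labels)             # torch.Size([4, 1, 16000]), tensor([0, 0, 0, 0])
```

The per-sample random message is what makes the triggers "personalized": because no two poisoned samples share the same perturbation pattern, defenses that search for a single repeated trigger across corrupted samples have no common signature to match.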