WaveGuard: Understanding and Mitigating Audio Adversarial Examples

Cited by: 0
Authors
Hussain, Shehzeen [1 ]
Neekhara, Paarth [1 ]
Dubnov, Shlomo [1 ]
McAuley, Julian [1 ]
Koushanfar, Farinaz [1 ]
Affiliations
[1] Univ Calif San Diego, San Diego, CA 92103 USA
Keywords
DOI
Not available
CLC Number
TP [automation technology, computer technology]
Subject Classification Code
0812
Abstract
There has been a recent surge in adversarial attacks on deep learning based automatic speech recognition (ASR) systems. These attacks pose new challenges to deep learning security and have raised significant concerns in deploying ASR systems in safety-critical applications. In this work, we introduce WaveGuard: a framework for detecting adversarial inputs that are crafted to attack ASR systems. Our framework incorporates audio transformation functions and analyzes the ASR transcriptions of the original and transformed audio to detect adversarial inputs. We demonstrate that our defense framework is able to reliably detect adversarial examples constructed by four recent audio adversarial attacks, with a variety of audio transformation functions. With careful regard for best practices in defense evaluations, we analyze our proposed defense and its strength to withstand adaptive and robust attacks in the audio domain. We empirically demonstrate that audio transformations that recover audio from perceptually informed representations can lead to a strong defense that is robust against an adaptive adversary even in a complete whitebox setting. Furthermore, WaveGuard can be used out of the box and integrated directly with any ASR model to efficiently detect audio adversarial examples, without the need for model retraining.
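The abstract describes the core detection idea: transcribe both the original audio and a transformed copy, then flag inputs whose transcriptions diverge sharply. The minimal Python sketch below illustrates that comparison only; the names asr_transcribe and transform, and the 0.5 threshold, are illustrative assumptions and not taken from the paper's implementation.

    def character_error_rate(reference: str, hypothesis: str) -> float:
        """Levenshtein edit distance between two strings, normalized by the reference length."""
        if not reference:
            return float(len(hypothesis) > 0)
        prev = list(range(len(hypothesis) + 1))
        for i, rc in enumerate(reference, 1):
            curr = [i]
            for j, hc in enumerate(hypothesis, 1):
                curr.append(min(prev[j] + 1,                  # deletion
                                curr[j - 1] + 1,              # insertion
                                prev[j - 1] + (rc != hc)))    # substitution
            prev = curr
        return prev[-1] / len(reference)

    def is_adversarial(audio, asr_transcribe, transform, threshold=0.5):
        """WaveGuard-style check (sketch): an input is flagged as adversarial if the
        ASR transcription changes substantially after an audio transformation.
        `asr_transcribe` and `transform` are hypothetical callables supplied by the user."""
        original_text = asr_transcribe(audio)
        transformed_text = asr_transcribe(transform(audio))
        # Benign audio should transcribe almost identically before and after the
        # transformation; adversarial perturbations tend not to survive it.
        return character_error_rate(original_text, transformed_text) > threshold

Any transformation that perturbs the adversarial noise while preserving intelligible speech (for example, the perceptually informed resynthesis mentioned in the abstract) can be plugged in as transform.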
Pages: 2273-2290
Number of pages: 18
Related Papers
50 items in total
  • [1] Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition
    Chen, Guangke
    Zhao, Zhe
    Song, Fu
    Chen, Sen
    Fan, Lingling
    Wang, Feng
    Wang, Jiashui
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (05) : 3970 - 3987
  • [2] DOMPTEUR: Taming Audio Adversarial Examples
    Eisenhofer, Thorsten
    Schoenherr, Lea
    Frank, Joel
    Speckemeier, Lars
    Kolossa, Dorothea
    Holz, Thorsten
    PROCEEDINGS OF THE 30TH USENIX SECURITY SYMPOSIUM, 2021, : 2309 - 2326
  • [3] Detecting Audio Adversarial Examples with Logit Noising
    Park, Namgyu
    Ji, Sangwoo
    Kim, Jong
    37TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2021, 2021, : 586 - 595
  • [4] A Unified Framework for Detecting Audio Adversarial Examples
    Du, Xia
    Pun, Chi-Man
    Zhang, Zheng
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 3986 - 3994
  • [5] Fast and Accurate Detection of Audio Adversarial Examples
    Huang, Po-Hao
    Lan, Yung-Yuan
    Harriman, Wilbert
    Chiuwanara, Venesia
    Wang, Ting-Chi
    2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023,
  • [6] Improving the Security of Audio CAPTCHAs With Adversarial Examples
    Wang, Ping
    Gao, Haichang
    Guo, Xiaoyan
    Yuan, Zhongni
    Nian, Jiawei
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (02) : 650 - 667
  • [7] Understanding and Benchmarking the Commonality of Adversarial Examples
    He, Ruiwen
    Cheng, Yushi
    Ze, Junning
    Ji, Xiaoyu
    Xu, Wenyuan
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 1665 - 1683
  • [8] Comparing Unsupervised Detection Algorithms for Audio Adversarial Examples
    Choosaksakunwiboon, Shanatip
    Pizzi, Karla
    Kao, Ching-Yu
    SPEECH AND COMPUTER, SPECOM 2022, 2022, 13721 : 114 - 127
  • [9] Audio Adversarial Examples Generation with Recurrent Neural Networks
    Chang, Kuei-Huan
    Huang, Po-Hao
    Yu, Honggang
    Jin, Yier
    Wang, Ting-Chi
    2020 25TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2020, 2020, : 488 - 493
  • [10] Generating Audio Adversarial Examples with Ensemble Substituted Models
    Zhang, Yun
    Li, Hongwei
    Xu, Guowen
    Luo, Xizhao
    Dong, Guishan
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,