WaveGuard: Understanding and Mitigating Audio Adversarial Examples

Cited by: 0
Authors
Hussain, Shehzeen [1]
Neekhara, Paarth [1]
Dubnov, Shlomo [1]
McAuley, Julian [1]
Koushanfar, Farinaz [1]
Affiliations
[1] University of California San Diego, San Diego, CA 92103, USA
Keywords:
DOI: not available
CLC number: TP [automation technology; computer technology]
Discipline code: 0812
Abstract
There has been a recent surge in adversarial attacks on deep learning based automatic speech recognition (ASR) systems. These attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems in safety-critical applications. In this work, we introduce WaveGuard: a framework for detecting adversarial inputs that are crafted to attack ASR systems. Our framework incorporates audio transformation functions and analyzes the ASR transcriptions of the original and transformed audio to detect adversarial inputs. We demonstrate that our defense framework reliably detects adversarial examples constructed by four recent audio adversarial attacks, with a variety of audio transformation functions. With careful regard for best practices in defense evaluations, we analyze our proposed defense and its ability to withstand adaptive and robust attacks in the audio domain. We empirically demonstrate that audio transformations that recover audio from perceptually informed representations can lead to a strong defense that is robust against an adaptive adversary even in a complete white-box setting. Furthermore, WaveGuard can be used out of the box and integrated directly with any ASR model to efficiently detect audio adversarial examples, without requiring model retraining.
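The detection idea described in the abstract can be sketched in a few lines. The Python below is an illustrative sketch, not the authors' implementation: asr_transcribe is a hypothetical stand-in for any ASR model's inference call, quantize_dequantize is one example of the kind of audio transformation the paper considers, and the character-error-rate threshold of 0.5 is a placeholder that would be tuned against a development set in practice.

import numpy as np

def character_error_rate(ref: str, hyp: str) -> float:
    # Levenshtein edit distance between two transcriptions,
    # normalized by the length of the reference string.
    m, n = len(ref), len(hyp)
    dp = np.zeros((m + 1, n + 1), dtype=int)
    dp[:, 0] = np.arange(m + 1)
    dp[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i, j] = min(dp[i - 1, j] + 1,      # deletion
                           dp[i, j - 1] + 1,      # insertion
                           dp[i - 1, j - 1] + cost)  # substitution
    return dp[m, n] / max(m, 1)

def quantize_dequantize(audio: np.ndarray, bits: int = 8) -> np.ndarray:
    # Example transformation: requantize a [-1, 1] float waveform
    # to `bits` bits and back, discarding low-amplitude perturbations.
    levels = 2 ** (bits - 1)
    return np.round(audio * levels) / levels

def is_adversarial(audio, asr_transcribe, transform=quantize_dequantize,
                   threshold=0.5):
    # Flag the input as adversarial if the ASR transcriptions of the
    # original and transformed audio diverge beyond the threshold.
    t_original = asr_transcribe(audio)
    t_transformed = asr_transcribe(transform(audio))
    return character_error_rate(t_original, t_transformed) > threshold

The design intuition: benign speech transcribes almost identically before and after a mild transformation, while an adversarial perturbation is brittle and its target transcription collapses, so the two transcriptions diverge sharply. Any ASR model can be plugged in unchanged, which is why no retraining is needed.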
Pages: 2273-2290
Page count: 18