Generating Transferable Adversarial Examples for Speech Classification

Cited by: 12
Authors
Kim, Hoki [1 ]
Park, Jinseong [1 ]
Lee, Jaewook [1 ]
Affiliations
[1] Seoul Natl Univ, Gwanakro 1, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
Speech classification; Adversarial attack; Transferability
DOI
10.1016/j.patcog.2022.109286
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Despite the success of deep neural networks, the existence of adversarial attacks has revealed the vulnerability of neural networks in terms of security. Adversarial attacks add subtle noise to the original example, resulting in a false prediction. Although adversarial attacks have been mainly studied in the image domain, a recent line of research has discovered that speech classification systems are also exposed to adversarial attacks. By adding inaudible noise, an adversary can deceive speech classification systems and cause fatal issues in various applications, such as speaker identification and command recognition tasks. However, research on the transferability of audio adversarial examples is still limited. Thus, in this study, we first investigate the transferability of audio adversarial examples with different structures and conditions. Through extensive experiments, we discover that the transferability of audio adversarial examples is related to their noise sensitivity. Based on the analyses, we present a new adversarial attack called noise injected attack that generates highly transferable audio adversarial examples by injecting additive noise during the gradient ascent process. Our experimental results demonstrate that the proposed method outperforms other adversarial attacks in terms of transferability. (c) 2023 Elsevier Ltd. All rights reserved.
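The abstract only sketches the method at a high level, so the following is a minimal PyTorch sketch of the two ideas it describes: injecting additive noise during the gradient ascent that crafts the perturbation, and measuring the noise sensitivity of the resulting adversarial example. It assumes a PGD-style L-infinity attack on raw waveforms in [-1, 1]; the function names (noise_injected_attack, noise_sensitivity), the Gaussian noise model, and every hyperparameter value are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a noise-injected gradient-ascent attack; all names
# and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F

def noise_injected_attack(model, x, y, eps=0.002, alpha=0.0005,
                          steps=40, noise_std=0.001):
    """Untargeted gradient-ascent attack that injects additive Gaussian
    noise into the input at every step, so the perturbation must remain
    effective under small input shifts (i.e., stay less noise-sensitive)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Inject noise before computing the gradient.
        noisy = (x_adv + noise_std * torch.randn_like(x_adv)).requires_grad_(True)
        loss = F.cross_entropy(model(noisy), y)
        grad, = torch.autograd.grad(loss, noisy)
        x_adv = x_adv + alpha * grad.sign()          # gradient-ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)     # project into eps-ball
        x_adv = x_adv.clamp(-1.0, 1.0).detach()      # keep a valid waveform
    return x_adv

@torch.no_grad()
def noise_sensitivity(model, x_adv, noise_std=0.001, trials=20):
    """Fraction of random-noise trials in which the adversarial prediction
    survives; a rough proxy for the noise sensitivity the study analyzes."""
    adv_label = model(x_adv).argmax(dim=-1)
    hits = sum(
        int((model(x_adv + noise_std * torch.randn_like(x_adv))
             .argmax(dim=-1) == adv_label).all())
        for _ in range(trials)
    )
    return hits / trials
```

Under this reading, an example whose noise_sensitivity score stays high under random perturbations occupies a flatter adversarial region and, per the paper's finding, should transfer better to unseen models.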
Pages: 13
Related Papers
(50 records in total)
  • [31] Generating Adversarial Examples Against Remote Sensing Scene Classification via Feature Approximation
    Zhu, Rui
    Ma, Shiping
    Lian, Jiawei
    He, Linyuan
    Mei, Shaohui
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17 : 10174 - 10187
  • [32] TextJuggler: Fooling text classification tasks by generating high-quality adversarial examples
    Peng, Hao
    Wang, Zhe
    Wei, Chao
    Zhao, Dandan
    Xu, Guangquan
    Han, Jianming
    Guo, Shixin
    Zhong, Ming
    Ji, Shouling
    KNOWLEDGE-BASED SYSTEMS, 2024, 300
  • [33] Transferable adversarial examples can efficiently fool topic models
    Wang, Zhen
    Zheng, Yitao
    Zhu, Hai
    Yang, Chang
    Chen, Tianyi
    COMPUTERS & SECURITY, 2022, 118
  • [34] Dynamic loss yielding more transferable targeted adversarial examples
    Zhang, Ming
    Chen, Yongkang
    Li, Hu
    Qian, Cheng
    Kuang, Xiaohui
    NEUROCOMPUTING, 2024, 590
  • [35] Feature Space Perturbations Yield More Transferable Adversarial Examples
    Inkawhich, Nathan
    Wen, Wei
    Li, Hai
    Chen, Yiran
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 7059 - 7067
  • [36] Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
    Gubri, Martin
    Cordy, Maxime
    Papadakis, Mike
    Le Traon, Yves
    Sen, Koushik
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, VOL 180, 2022, 180 : 738 - 748
  • [37] Generating adversarial examples with collaborative generative models
    Xu, Lei
    Zhai, Junhai
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2024, 23 (02) : 1077 - 1091
  • [38] An efficient framework for generating robust adversarial examples
    Zhang, Lili
    Wang, Xiaoping
    Lu, Kai
    Peng, Shaoliang
    Wang, Xiaodong
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2020, 35 (09) : 1433 - 1449
  • [40] Generating adversarial examples with input significance indicator
    Qiu, Xiaofeng
    Zhou, Shuya
    NEUROCOMPUTING, 2020, 394 : 1 - 12