Transferable Adversarial Attacks Against ASR

Cited by: 0
Authors
Gao, Xiaoxue [1 ]
Li, Zexin [2 ]
Chen, Yiming [3 ]
Liu, Cong [2 ]
Li, Haizhou [4 ]
Affiliations
[1] ASTAR, Inst Infocomm Res, Singapore 138632, Singapore
[2] Univ Calif Riverside, Riverside, CA 92521 USA
[3] Natl Univ Singapore, Singapore 117583, Singapore
[4] Chinese Univ Hong Kong, Shenzhen Res Inst Big Data, Shenzhen 518172, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial attacks; speech recognition;
DOI
10.1109/LSP.2024.3443711
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Given the extensive research and real-world applications of automatic speech recognition (ASR), ensuring the robustness of ASR models against minor input perturbations is a crucial consideration for maintaining their effectiveness in real-time scenarios. Previous explorations of ASR model robustness have predominantly evaluated accuracy in white-box settings with full access to the ASR models. However, full ASR model details are often unavailable in real-world applications, so evaluating the robustness of black-box ASR models is essential for a comprehensive understanding of ASR model resilience. In this regard, we thoroughly study the vulnerability of cutting-edge ASR models to practical black-box attacks and propose to employ two advanced time-domain transferable attacks alongside our differentiable feature extractor. We also propose a speech-aware gradient optimization approach (SAGO) for ASR, which forces mistranscription while remaining imperceptible to humans through a voice activity detection rule and a speech-aware gradient-oriented optimizer. Our comprehensive experimental results reveal performance enhancements over baseline approaches across five models on two databases.
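The abstract describes constraining the adversarial perturbation to speech regions via a voice activity detection (VAD) rule before applying gradient-based optimization. The following is a minimal illustrative sketch of that general idea, not the paper's actual method: it assumes a simple frame-energy VAD (the paper's VAD rule and optimizer are not specified here) and a precomputed loss gradient, and applies one signed-gradient step only inside voiced frames.

```python
import numpy as np

def energy_vad_mask(signal, frame_len=400, threshold_ratio=0.1):
    """Return a per-sample 0/1 mask marking frames whose energy
    exceeds a fraction of the peak frame energy (a toy VAD rule)."""
    n_frames = len(signal) // frame_len
    energies = [np.sum(signal[i * frame_len:(i + 1) * frame_len] ** 2)
                for i in range(n_frames)]
    threshold = threshold_ratio * max(energies)
    mask = np.zeros(len(signal))
    for i, e in enumerate(energies):
        if e >= threshold:
            mask[i * frame_len:(i + 1) * frame_len] = 1.0
    return mask

def speech_aware_step(signal, loss_grad, step_size=1e-3):
    """One signed-gradient ascent step on the ASR loss, confined to
    voiced regions so silence is left untouched (hypothetical helper;
    loss_grad would come from backpropagating through the ASR model)."""
    mask = energy_vad_mask(signal)
    return signal + step_size * mask * np.sign(loss_grad)
```

For example, a waveform whose first half is silence and second half is speech would only be perturbed in the second half, which is one way a VAD constraint can limit perceptible distortion in quiet regions.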
Pages: 2200-2204 (5 pages)