Defending against FakeBob Adversarial Attacks in Speaker Verification Systems with Noise-Adding

Cited by: 5
Authors
Chen, Zesheng [1]
Chang, Li-Chi [1]
Chen, Chao [1]
Wang, Guoping [1]
Bi, Zhuming [1]
Affiliations
[1] Purdue Univ Ft Wayne, Coll Engn Technol & Comp Sci, Ft Wayne, IN 46805 USA
Keywords
speaker verification; FakeBob adversarial attacks; defense system; denoising; noise-adding; adaptive attacks; RECOGNITION; DEFENSES
DOI
10.3390/a15080293
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Speaker verification systems use the human voice as an important biometric to identify legitimate users, adding a security layer that protects voice-controlled Internet-of-Things smart homes against illegal access. Recent studies have demonstrated that speaker verification systems are vulnerable to adversarial attacks such as FakeBob. The goal of this work is to design and implement a simple and lightweight defense system that is effective against FakeBob. We specifically study two opposite pre-processing operations on input audio in speaker verification systems: denoising, which attempts to remove or reduce adversarial perturbations, and noise-adding, which adds small noise to an input audio signal. Through experiments, we demonstrate that both methods significantly weaken FakeBob attacks, with noise-adding achieving even better performance than denoising. Specifically, with denoising, the targeted attack success rate of FakeBob can be reduced from 100% to 56.05% in GMM speaker verification systems and from 95% to only 38.63% in i-vector speaker verification systems. With noise-adding, these numbers can be further lowered to 5.20% and 0.50%, respectively. As a proactive measure, we study several possible adaptive FakeBob attacks against the noise-adding method. Experimental results demonstrate that noise-adding still provides a considerable level of protection against these countermeasures.
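The noise-adding defense described in the abstract is essentially a single pre-processing step applied to the waveform before scoring. Below is a minimal sketch, assuming a NumPy float waveform and a target signal-to-noise ratio; the Gaussian noise distribution, the 20 dB default level, and the verify() back end are illustrative assumptions, not the paper's exact configuration.

    import numpy as np

    def add_defensive_noise(audio, snr_db=20.0, rng=None):
        # Scale zero-mean Gaussian noise so the noisy waveform meets a
        # target signal-to-noise ratio relative to the clean input.
        rng = rng if rng is not None else np.random.default_rng()
        signal_power = np.mean(audio ** 2)
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        return audio + rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)

    # Hypothetical usage: verify() stands in for any GMM or i-vector
    # speaker verification back end; the defense only pre-processes input.
    # accept = verify(add_defensive_noise(waveform), claimed_speaker)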
Pages: 20
Related Papers
50 records in total
  • [1] Defending Against Adversarial Attacks in Speaker Verification Systems
    Chang, Li-Chi
    Chen, Zesheng
    Chen, Chao
    Wang, Guoping
    Bi, Zhuming
2021 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE (IPCCC), 2021
  • [2] VeriFace: Defending against Adversarial Attacks in Face Verification Systems
    Sayed, Awny
    Kinlany, Sohair
    Zaki, Alaa
    Mahfouz, Ahmed
CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 76(03): 3151-3166
  • [3] Defending Distributed Systems Against Adversarial Attacks
    Su L.
Performance Evaluation Review, 2020, 47(03): 24-27
  • [4] Ensemble Adversarial Defenses and Attacks in Speaker Verification Systems
    Chen, Zesheng
    Li, Jack
    Chen, Chao
IEEE INTERNET OF THINGS JOURNAL, 2024, 11(20): 32645-32655
  • [5] On the Detection of Adaptive Adversarial Attacks in Speaker Verification Systems
    Chen, Zesheng
IEEE INTERNET OF THINGS JOURNAL, 2023, 10(18): 16271-16283
  • [6] Quasi-Newton Adversarial Attacks on Speaker Verification Systems
    Goto, Keita
    Inoue, Nakamasa
2020 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2020: 527-531
  • [7] Practical Adversarial Attacks Against Speaker Recognition Systems
    Li, Zhuohang
    Shi, Cong
    Xie, Yi
    Liu, Jian
    Yuan, Bo
    Chen, Yingying
PROCEEDINGS OF THE 21ST INTERNATIONAL WORKSHOP ON MOBILE COMPUTING SYSTEMS AND APPLICATIONS (HOTMOBILE'20), 2020: 9-14
  • [8] Pairing Weak with Strong: Twin Models for Defending against Adversarial Attack on Speaker Verification
    Peng, Zhiyuan
    Li, Xu
    Lee, Tan
INTERSPEECH 2021, 2021: 4284-4288
  • [9] Adversarial Optimization for Dictionary Attacks on Speaker Verification
    Marras, Mirko
    Korus, Pawel
    Memon, Nasir
    Fenu, Gianni
INTERSPEECH 2019, 2019: 2913-2917
  • [10] MEH-FEST-NA: An Ensemble Defense System Against Adversarial Attacks in Speaker Verification Systems
    Chen, Zesheng
    Li, Jack
    Chen, Chao
2024 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE TESTING, AITEST, 2024: 29-36