Defending against FakeBob Adversarial Attacks in Speaker Verification Systems with Noise-Adding

Cited by: 5
Authors
Chen, Zesheng [1 ]
Chang, Li-Chi [1 ]
Chen, Chao [1 ]
Wang, Guoping [1 ]
Bi, Zhuming [1 ]
Affiliations
[1] Purdue Univ Ft Wayne, Coll Engn Technol & Comp Sci, Ft Wayne, IN 46805 USA
Keywords
speaker verification; FakeBob adversarial attacks; defense system; denoising; noise-adding; adaptive attacks; RECOGNITION; DEFENSES;
DOI
10.3390/a15080293
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Speaker verification systems use the human voice as an important biometric to identify legitimate users, adding a security layer that protects voice-controlled Internet-of-Things smart homes against illegal access. Recent studies have demonstrated that speaker verification systems are vulnerable to adversarial attacks such as FakeBob. The goal of this work is to design and implement a simple and lightweight defense system that is effective against FakeBob. We specifically study two opposite pre-processing operations on input audio in speaker verification systems: denoising, which attempts to remove or reduce perturbations, and noise-adding, which adds small noise to an input audio sample. Through experiments, we demonstrate that both methods significantly weaken the ability of FakeBob attacks, with noise-adding achieving even better performance than denoising. Specifically, with denoising, the targeted attack success rate of FakeBob attacks can be reduced from 100% to 56.05% in GMM speaker verification systems and from 95% to only 38.63% in i-vector speaker verification systems, respectively. With noise-adding, those numbers can be further lowered to 5.20% and 0.50%, respectively. As a proactive measure, we also study several possible adaptive FakeBob attacks against the noise-adding method. Experimental results demonstrate that noise-adding still provides a considerable level of protection against these countermeasures.
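The noise-adding defense described in the abstract amounts to injecting small random noise into each input waveform before it reaches the verification model, so that the attacker's carefully optimized perturbation is disrupted while the speech itself remains largely intact. A minimal sketch of such a pre-processing step is shown below; the Gaussian noise model and the target signal-to-noise ratio are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def add_defensive_noise(audio: np.ndarray, snr_db: float = 20.0, rng=None) -> np.ndarray:
    """Add white Gaussian noise to a waveform at a target SNR (in dB).

    The noise power is chosen relative to the measured signal power so
    that the returned waveform has approximately the requested SNR.
    """
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise
```

In a deployed system, this transformation would be applied to every enrollment or test utterance before feature extraction; the SNR would need to be tuned so that the false rejection rate of benign speakers stays acceptable while adversarial examples are neutralized.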
Pages: 20
Related Papers (50 total)
  • [41] Robust Speaker Verification Against Additive Noise
    Wang, Ming-He
    Zhang, Er-Hua
    Tang, Zhen-Min
    JOURNAL OF INFORMATION SCIENCE AND ENGINEERING, 2019, 35 (02) : 291 - 305
  • [42] Defending Against Adversarial Attacks on Time-series with Selective Classification
    Kuehne, Joana
    Guehmann, Clemens
    2022 PROGNOSTICS AND HEALTH MANAGEMENT CONFERENCE, PHM-LONDON 2022, 2022, : 169 - 175
  • [43] Defending edge computing based metaverse AI against adversarial attacks
    Yi, Zhangao
    Qian, Yongfeng
    Chen, Min
    Alqahtani, Salman A.
    Hossain, M. Shamim
    AD HOC NETWORKS, 2023, 150
  • [44] Defending Against Adversarial Fingerprint Attacks Based on Deep Image Prior
    Yoo, Hwajung
    Hong, Pyo Min
    Kim, Taeyong
    Yoon, Jung Won
    Lee, Youn Kyu
    IEEE ACCESS, 2023, 11 : 78713 - 78725
  • [45] Defending against Adversarial Attacks in Federated Learning on Metric Learning Model
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    He, Liangzhong
    2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 197 - 206
  • [46] SATYA: Defending Against Adversarial Attacks Using Statistical Hypothesis Testing
    Raj, Sunny
    Pullum, Laura
    Ramanathan, Arvind
    Jha, Sumit Kumar
    FOUNDATIONS AND PRACTICE OF SECURITY (FPS 2017), 2018, 10723 : 277 - 292
  • [47] Defending Against Local Adversarial Attacks through Empirical Gradient Optimization
    Sun, Boyang
    Ma, Xiaoxuan
    Wang, Hengyou
    TEHNICKI VJESNIK-TECHNICAL GAZETTE, 2023, 30 (06): : 1888 - 1898
  • [48] Defending Hardware-Based Malware Detectors Against Adversarial Attacks
    Kuruvila, Abraham Peedikayil
    Kundu, Shamik
    Basu, Kanad
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2021, 40 (09) : 1727 - 1739
  • [49] Improving Robustness of Facial Landmark Detection by Defending against Adversarial Attacks
    Zhu, Congcong
    Li, Xiaoqiang
    Li, Jide
    Dai, Songmin
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 11731 - 11740
  • [50] Efficacy of Defending Deep Neural Networks against Adversarial Attacks with Randomization
    Zhou, Yan
    Kantarcioglu, Murat
    Xi, Bowei
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS II, 2020, 11413