Defend Data Poisoning Attacks on Voice Authentication

Cited by: 1
Authors
Li, Ke [1 ]
Baird, Cameron [1 ]
Lin, Dan [1 ]
Affiliations
[1] Vanderbilt Univ, CS Dept, Nashville, TN 37240 USA
Keywords
Authentication; Data models; Passwords; Training; Web services; Speech recognition; Neural networks; Voice authentication; deep neural networks; data poisoning attacks; SUPPORT VECTOR MACHINES; SPEAKER RECOGNITION;
DOI
10.1109/TDSC.2023.3289446
CLC number
TP3 [computing technology, computer technology];
Discipline code
0812;
Abstract
With advances in deep learning, speaker verification has achieved very high accuracy and is gaining popularity as a biometric authentication option in many areas of daily life, especially the growing market of web services. Compared to traditional passwords, "vocal passwords" are much more convenient because they free users from memorizing different passwords. However, new machine learning attacks are putting these voice authentication systems at risk. Without a strong security guarantee, attackers could access legitimate users' web accounts by fooling the deep neural network (DNN) based voice recognition models. In this article, we demonstrate an easy-to-implement data poisoning attack on voice authentication systems that existing defense mechanisms cannot detect effectively. We then propose a more robust defense method called Guardian, a convolutional neural network (CNN) based discriminator. The Guardian discriminator integrates a series of novel techniques, including bias reduction, input augmentation, and ensemble learning. Our approach distinguishes about 95% of attacked accounts from normal accounts, far more effective than existing approaches, which achieve only about 60% accuracy.
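The abstract names Guardian's building blocks (a CNN discriminator combined with input augmentation and ensemble learning) but gives no architectural detail. The sketch below is a minimal, hypothetical PyTorch illustration of how such a discriminator could be wired together; GuardianCNN, ensemble_predict, the layer sizes, and the Gaussian-noise augmentation are assumptions for illustration, not the authors' implementation, and the bias-reduction step is omitted because the record gives no detail on it.

# Minimal sketch, assuming PyTorch and spectrogram inputs; all names,
# layer sizes, and the noise augmentation are hypothetical, not taken
# from the paper.
import torch
import torch.nn as nn

class GuardianCNN(nn.Module):
    """Binary discriminator: attacked (poisoned) account vs. normal account."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size output regardless of clip length
        )
        self.classifier = nn.Linear(32 * 4 * 4, 2)  # logits: [normal, attacked]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, time) log-mel spectrograms
        return self.classifier(self.features(x).flatten(1))

def ensemble_predict(models, spectrogram, n_augment: int = 5, noise_std: float = 0.01):
    """Average softmax scores over several independently trained models and
    over noisy copies of the input (a stand-in for input augmentation)."""
    probs = []
    with torch.no_grad():
        for model in models:
            model.eval()
            for _ in range(n_augment):
                noisy = spectrogram + noise_std * torch.randn_like(spectrogram)
                probs.append(torch.softmax(model(noisy), dim=-1))
    return torch.stack(probs).mean(dim=0)  # (batch, 2)

# Example: three ensemble members vote on one utterance (64 mel bins, 300 frames).
models = [GuardianCNN() for _ in range(3)]
scores = ensemble_predict(models, torch.randn(1, 1, 64, 300))
print("P(attacked) =", scores[0, 1].item())

Averaging scores across multiple models and perturbed copies of the same input is a standard way to make a discriminator's verdict less sensitive to any single model's blind spots, which is plausibly why the abstract pairs ensemble learning with input augmentation.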
Pages: 1754-1769
Page count: 16
Related papers
50 records in total
  • [21] Data Poisoning Attacks in Gossip Learning
    Pham, Alexandre
    Potop-Butucaru, Maria
    Tixeuil, Sebastien
    Fdida, Serge
    ADVANCED INFORMATION NETWORKING AND APPLICATIONS, VOL 2, AINA 2024, 2024, 200 : 213 - 224
  • [22] A New Data Randomization Method to Defend Buffer Overflow Attacks
    Yan Fen
    Yuan Fuchao
    Shen Xiaobing
    Yin Xinchun
Mao Bing
    2010 INTERNATIONAL COLLOQUIUM ON COMPUTING, COMMUNICATION, CONTROL, AND MANAGEMENT (CCCM2010), VOL I, 2010, : 466 - 469
  • [23] Poisoning attacks on face authentication systems by using the generative deformation model
    Chan, Chak-Tong
    Huang, Szu-Hao
    Choy, Patrick Puiyui
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (19) : 29457 - 29476
  • [25] Defend against silicon poisoning
Palladian Publications (22): 1600
  • [26] Decentralized Learning Robust to Data Poisoning Attacks
    Mao, Yanwen
    Data, Deepesh
    Diggavi, Suhas
    Tabuada, Paulo
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 6788 - 6793
  • [27] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375
  • [28] Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
    Schwarzschild, Avi
    Goldblum, Micah
    Gupta, Arjun
    Dickerson, John P.
    Goldstein, Tom
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021
  • [29] Data Poisoning Attacks and Defenses to Crowdsourcing Systems
    Fang, Minghong
    Sun, Minghao
    Li, Qi
    Gong, Neil Zhenqiang
    Tian, Jin
    Liu, Jia
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 969 - 980
  • [30] Data Poisoning Attacks against Autoregressive Models
    Alfeld, Scott
    Zhu, Xiaojin
    Barford, Paul
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1452 - 1458