Defend Data Poisoning Attacks on Voice Authentication

Cited by: 1
Authors
Li, Ke [1 ]
Baird, Cameron [1 ]
Lin, Dan [1 ]
Affiliations
[1] Vanderbilt Univ, CS Dept, Nashville, TN 37240 USA
Keywords
Authentication; Data models; Passwords; Training; Web services; Speech recognition; Neural networks; Voice authentication; deep neural networks; data poisoning attacks; SUPPORT VECTOR MACHINES; SPEAKER RECOGNITION;
DOI
10.1109/TDSC.2023.3289446
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
With advances in deep learning, speaker verification has achieved very high accuracy and is gaining popularity as a biometric authentication option in many areas of daily life, especially the growing market of web services. Compared to traditional passwords, "vocal passwords" are much more convenient, as they relieve people from memorizing different passwords. However, new machine learning attacks are putting these voice authentication systems at risk. Without a strong security guarantee, attackers could access legitimate users' web accounts by fooling the deep neural network (DNN) based voice recognition models. In this article, we demonstrate an easy-to-implement data poisoning attack on voice authentication systems that existing defense mechanisms cannot detect effectively. We therefore propose a more robust defense method called Guardian, a convolutional neural network-based discriminator that integrates a series of novel techniques including bias reduction, input augmentation, and ensemble learning. Our approach distinguishes about 95% of attacked accounts from normal accounts, far outperforming existing approaches, which achieve only about 60% accuracy.
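The abstract names the ingredients of the Guardian defense (a CNN discriminator, input augmentation, ensemble learning) but, being an index record, gives no implementation detail. The sketch below only illustrates how those three ingredients can fit together in PyTorch; every class name, input shape, augmentation choice, and hyperparameter here is an assumption for illustration, not the paper's actual Guardian design.

```python
# Illustrative sketch only: a CNN-based binary discriminator with simple
# input augmentation and majority-vote ensembling, loosely mirroring the
# ideas named in the abstract. Names, shapes, and hyperparameters are
# assumptions, not the paper's Guardian implementation.
import torch
import torch.nn as nn

class DiscriminatorCNN(nn.Module):
    """Binary classifier: poisoned (1) vs. normal (0) account input map."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size vector
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

def augment(x: torch.Tensor, noise_std: float = 0.01) -> torch.Tensor:
    """Input augmentation (assumed here to be additive Gaussian noise)."""
    return x + noise_std * torch.randn_like(x)

@torch.no_grad()
def ensemble_predict(models: list[nn.Module], x: torch.Tensor) -> torch.Tensor:
    """Majority vote over an ensemble of independently trained discriminators."""
    votes = torch.stack([m(x).argmax(dim=1) for m in models])  # (n_models, batch)
    return (votes.float().mean(dim=0) > 0.5).long()            # 1 = flagged as poisoned

# Usage sketch: three (here untrained) discriminators voting on a batch of
# hypothetical 64x64 per-account feature maps.
models = [DiscriminatorCNN().eval() for _ in range(3)]
batch = augment(torch.randn(4, 1, 64, 64))
print(ensemble_predict(models, batch))  # tensor of 0/1 flags, one per account
```

The ensemble step is what would plausibly drive the robustness claim: a poisoned account must fool a majority of independently trained discriminators, not just one.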
Pages: 1754-1769
Number of pages: 16
Related Papers
50 records in total
  • [31] Concealed Data Poisoning Attacks on NLP Models
    Wallace, Eric
    Zhao, Tony Z.
    Feng, Shi
    Singh, Sameer
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 139 - 150
  • [32] Towards Poisoning of Federated Support Vector Machines with Data Poisoning Attacks
    Mouri, Israt Jahan
    Ridowan, Muhammad
    Adnan, Muhammad Abdullah
    PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND SERVICES SCIENCE, CLOSER 2023, 2023, : 24 - 33
  • [33] Stronger data poisoning attacks break data sanitization defenses
    Koh, Pang Wei
    Steinhardt, Jacob
    Liang, Percy
    MACHINE LEARNING, 2022, 111 (01) : 1 - 47
  • [35] Heart attacks - stem cells defend
    [Anonymous]
    REGENERATIVE MEDICINE, 2006, 1 (06) : 752 - 752
  • [36] Defend GPUs Against DoS Attacks
    Zhang, Wei
    2013 IEEE 32ND INTERNATIONAL PERFORMANCE COMPUTING AND COMMUNICATIONS CONFERENCE (IPCCC), 2013
  • [37] Data poisoning attacks on traffic state estimation and prediction
    Wang, Feilong
    Wang, Xin
    Hong, Yuan
    Rockafellar, R. Tyrrell
    Ban, Xuegang
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2024, 168
  • [38] Data Poisoning and Backdoor Attacks on Audio Intelligence Systems
    Ge, Yunjie
    Wang, Qian
    Yu, Jiayuan
    Shen, Chao
    Li, Qi
    IEEE COMMUNICATIONS MAGAZINE, 2023, 61 (12) : 176 - 182
  • [39] Data Poisoning Attacks to Local Differential Privacy Protocols
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 30TH USENIX SECURITY SYMPOSIUM, 2021, : 947 - 964
  • [40] An Equivalence Between Data Poisoning and Byzantine Gradient Attacks
    Farhadkhani, Sadegh
    Guerraoui, Rachid
    Hoang, Le-Nguyen
    Villemaud, Oscar
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022