Defend Data Poisoning Attacks on Voice Authentication

Cited by: 1
Authors
Li, Ke [1 ]
Baird, Cameron [1 ]
Lin, Dan [1 ]
Affiliations
[1] Vanderbilt Univ, CS Dept, Nashville, TN 37240 USA
Keywords
Authentication; Data models; Passwords; Training; Web services; Speech recognition; Neural networks; Voice authentication; deep neural networks; data poisoning attacks; SUPPORT VECTOR MACHINES; SPEAKER RECOGNITION;
DOI
10.1109/TDSC.2023.3289446
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
With the advances in deep learning, speaker verification has achieved very high accuracy and is gaining popularity as a biometric authentication option in many scenarios of daily life, especially in the growing market of web services. Compared to traditional passwords, "vocal passwords" are much more convenient because they relieve people from memorizing different passwords. However, new machine learning attacks are putting these voice authentication systems at risk. Without a strong security guarantee, attackers could access legitimate users' web accounts by fooling the deep neural network (DNN) based voice recognition models. In this article, we demonstrate an easy-to-implement data poisoning attack on voice authentication systems that existing defense mechanisms cannot effectively detect. We therefore propose a more robust defense method called Guardian, a convolutional neural network-based discriminator. The Guardian discriminator integrates a series of novel techniques, including bias reduction, input augmentation, and ensemble learning. Our approach distinguishes about 95% of attacked accounts from normal accounts, which is far more effective than existing approaches that achieve only around 60% accuracy.
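To make the abstract's description concrete, the sketch below illustrates the general shape of a CNN-based discriminator with input augmentation and ensemble voting over speaker-embedding sequences. It is a minimal illustration written in PyTorch, not the authors' Guardian implementation: the embedding dimension, layer sizes, Gaussian-jitter augmentation, and three-member ensemble are assumptions chosen for the example.

```python
# Minimal sketch (not the paper's released code): a 1-D CNN discriminator that
# flags possibly-poisoned voice-authentication accounts from a sequence of
# enrollment-utterance embeddings. All hyperparameters here are illustrative.
import torch
import torch.nn as nn


class AccountDiscriminator(nn.Module):
    """CNN over one account's enrollment embeddings; outputs normal vs. poisoned logits."""

    def __init__(self, emb_dim: int = 192, hidden: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the utterance axis
        )
        self.classifier = nn.Linear(hidden, 2)  # 0 = normal account, 1 = poisoned account

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_utterances, emb_dim); Conv1d expects (batch, emb_dim, num_utterances)
        h = self.features(x.transpose(1, 2)).squeeze(-1)
        return self.classifier(h)


def augment(x: torch.Tensor, noise_std: float = 0.01) -> torch.Tensor:
    """Toy input augmentation: add small Gaussian jitter to the embeddings."""
    return x + noise_std * torch.randn_like(x)


def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
    """Average softmax outputs of several independently trained discriminators."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models]).mean(dim=0)
    return probs.argmax(dim=-1)  # 1 = flagged as a poisoned account


if __name__ == "__main__":
    torch.manual_seed(0)
    ensemble = [AccountDiscriminator() for _ in range(3)]
    accounts = torch.randn(4, 10, 192)  # 4 accounts, 10 enrollment embeddings each
    print(ensemble_predict(ensemble, augment(accounts)))
```

In a setup like this, each ensemble member would be trained separately on labeled normal and poisoned enrollment sets, with the augmentation applied during training so that no single member overfits one embedding distribution; averaging their outputs then gives the final per-account decision.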
Pages: 1754-1769
Page count: 16
Related Papers
50 records in total
  • [41] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501
  • [42] Crowdsourcing Under Data Poisoning Attacks: A Comparative Study
    Tahmasebian, Farnaz
    Xiong, Li
    Sotoodeh, Mani
    Sunderam, Vaidy
    DATA AND APPLICATIONS SECURITY AND PRIVACY XXXIV, DBSEC 2020, 2020, 12122 : 310 - 332
  • [43] Data Poisoning Attacks on Cross-domain Recommendation
    Chen, Huiyuan
    Li, Jing
    PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019, : 2177 - 2180
  • [44] Data poisoning attacks against machine learning algorithms
    Yerlikaya, Fahri Anil
    Bahtiyar, Serif
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 208
  • [45] Towards Data Poisoning Attacks in Crowd Sensing Systems
    Miao, Chenglin
    Li, Qi
    Xiao, Houping
    Jiang, Wenjun
    Huai, Mengdi
    Su, Lu
    PROCEEDINGS OF THE NINETEENTH INTERNATIONAL SYMPOSIUM ON MOBILE AD HOC NETWORKING AND COMPUTING (MOBIHOC '18), 2018, : 111 - 120
  • [46] Data Poisoning based Backdoor Attacks to Contrastive Learning
    Zhang, Jinghuai
    Liu, Hongbin
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 24357 - 24366
  • [47] Data Poisoning Attacks on Regression Learning and Corresponding Defenses
    Mueller, Nicolas
    Kowatsch, Daniel
    Boettinger, Konstantin
    2020 IEEE 25TH PACIFIC RIM INTERNATIONAL SYMPOSIUM ON DEPENDABLE COMPUTING (PRDC 2020), 2020, : 80 - 89
  • [48] Data poisoning attacks in intelligent transportation systems: A survey
    Wang, Feilong
    Wang, Xin
    Ban, Xuegang
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2024, 165
  • [49] Demystifying Data Poisoning Attacks in Distributed Learning as a Service
    Wei, Wenqi
    Chow, Ka-Ho
    Wu, Yanzhao
    Liu, Ling
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (01) : 237 - 250
  • [50] Securing Machine Learning Against Data Poisoning Attacks
    Allheeib, Nasser
    INTERNATIONAL JOURNAL OF DATA WAREHOUSING AND MINING, 2024, 20 (01)