Deceiving supervised machine learning models via adversarial data poisoning attacks: a case study with USB keyboards

Cited by: 3
Authors
Chillara, Anil Kumar [1 ]
Saxena, Paresh [1 ]
Maiti, Rajib Ranjan [1 ]
Gupta, Manik [1 ]
Kondapalli, Raghu [2 ]
Zhang, Zhichao [2 ]
Kesavan, Krishnakumar [2 ]
Affiliations
[1] BITS Pilani, CSIS Dept, Hyderabad 500078, Telangana, India
[2] Axiado Corp, 2610 Orchard Pkwy,3rd Fl, San Jose, CA 95134 USA
Keywords
USB; Adversarial learning; Data poisoning attacks; Keystroke injection attacks; Supervised learning
DOI
10.1007/s10207-024-00834-y
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
Due to its plug-and-play functionality and wide device support, the universal serial bus (USB) protocol has become one of the most widely used protocols. However, this widespread adoption has introduced a significant security concern: the implicit trust granted to USB devices, which has created a vast array of attack vectors. Malicious USB devices exploit this trust by disguising themselves as benign peripherals and covertly injecting malicious commands into connected host devices. Existing research employs supervised learning models to identify such malicious devices, but our study reveals a weakness in these models when faced with sophisticated data poisoning attacks. We propose, design, and implement a sophisticated adversarial data poisoning attack to demonstrate how these models can be manipulated into misclassifying an attack device as a benign device. Our method entails generating keystroke data using a microprogrammable keystroke attack device. We develop an adversarial attacker by meticulously analyzing the distribution of keystroke features generated by benign users via USB keyboards. The initial training data is then modified through firmware-level changes to the attack device. Upon evaluating the models, our findings reveal a significant decrease in detection accuracy, from 99% to 53%, when the adversarial attacker is employed. This work highlights the critical need to reevaluate the dependability of machine learning-based USB threat detection mechanisms in the face of increasingly sophisticated attack methods. The demonstrated vulnerabilities underscore the importance of developing more robust and resilient detection strategies to protect against the evolution of malicious USB devices.
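To make the idea in the abstract concrete, the following Python sketch illustrates one way such an attack could work: attack samples whose keystroke timing is drawn from the benign timing distribution are added to the training set with benign labels, and the attack device then replays similarly timed keystrokes at test time. The feature set (mean inter-key latency, latency jitter, key-hold time), the synthetic timing distributions, and the RandomForest classifier are illustrative assumptions only, not the paper's actual data, device firmware, or models.

# Minimal sketch (not the paper's implementation): poisoning a keystroke-timing
# dataset with attack samples that mimic the benign timing distribution, so a
# supervised USB-keystroke classifier misses mimicry traffic at test time.
# Assumed features and distributions are illustrative, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def keystroke_features(n, latency_ms, jitter_ms):
    """Synthesize per-session features: mean inter-key latency, latency jitter, key-hold time."""
    lat = rng.normal(latency_ms, jitter_ms, size=(n, 1))
    jit = np.abs(rng.normal(jitter_ms, jitter_ms / 4, size=(n, 1)))
    hold = rng.normal(90, 15, size=(n, 1))  # human-like key-hold time (ms)
    return np.hstack([lat, jit, hold])

benign = keystroke_features(1000, latency_ms=180, jitter_ms=40)  # human typing
attack = keystroke_features(1000, latency_ms=8, jitter_ms=1)     # scripted injection

X = np.vstack([benign, attack])
y = np.hstack([np.zeros(1000), np.ones(1000)])  # 1 = attack device

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean_model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("clean accuracy:", accuracy_score(y_te, clean_model.predict(X_te)))

# Poisoning step: the attacker's firmware emits keystrokes whose timing is
# sampled from the benign distribution; these samples enter training labeled benign.
poison = keystroke_features(600, latency_ms=180, jitter_ms=40)
X_pois = np.vstack([X_tr, poison])
y_pois = np.hstack([y_tr, np.zeros(600)])

# Test-time attack traffic also mimics benign timing, so detection accuracy drops.
X_te_adv = np.vstack([X_te[y_te == 0], keystroke_features(300, 180, 40)])
y_te_adv = np.hstack([np.zeros((y_te == 0).sum()), np.ones(300)])

poisoned_model = RandomForestClassifier(random_state=0).fit(X_pois, y_pois)
print("accuracy under poisoning + mimicry:",
      accuracy_score(y_te_adv, poisoned_model.predict(X_te_adv)))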
Pages: 2043-2061
Number of pages: 19
Related papers (50 records)
  • [41] Adversarial Attacks Against Reinforcement Learning Based Tactical Networks: A Case Study
    Loevenich, Johannes F.
    Bode, Jonas
    Huerten, Tobias
    Liberto, Luca
    Spelter, Florian
    Rettore, Paulo H. L.
    Lopes, Roberto Rigolin F.
    2022 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM), 2022,
  • [42] Locational detection of the false data injection attacks via semi-supervised multi-label adversarial network
    Feng, Hantong
    Han, Yinghua
    Li, Keke
    Si, Fangyuan
    Zhao, Qiang
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2024, 155
  • [43] CARL: Unsupervised Code-Based Adversarial Attacks for Programming Language Models via Reinforcement Learning
Yao, Kaichun
    Wang, Hao
    Qin, Chuan
Zhu, Hengshu
    Wu, Yanjun
    Zhang, Libo
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2025, 34 (01)
  • [44] Predicting the band gap of ZnO quantum dots via supervised machine learning models
    Regonia, Paul Rossener
    Pelicano, Christian Mark
    Tani, Ryosuke
    Ishizumi, Atsushi
    Yanagi, Hisao
    Ikeda, Kazushi
    OPTIK, 2020, 207 (207):
  • [45] Cybersecurity in Smart Grids: Detecting False Data Injection Attacks Utilizing Supervised Machine Learning Techniques
    Shees, Anwer
    Tariq, Mohd
    Sarwat, Arif I.
    ENERGIES, 2024, 17 (23)
  • [46] On the Use of VGs for Feature Selection in Supervised Machine Learning - A Use Case to Detect Distributed DoS Attacks
    Lopes, Joao
    Partida, Alberto
    Pinto, Pedro
    Pinto, Antonio
    OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, PT I, OL2A 2023, 2024, 1981 : 269 - 283
  • [47] Predicting Breast Cancer via Supervised Machine Learning Methods on Class Imbalanced Data
    Rajendran, Keerthana
    Jayabalan, Manoj
    Thiruchelvam, Vinesh
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2020, 11 (08) : 54 - 63
  • [48] Adversarial Reinforcement Learning Based Data Poisoning Attacks Defense for Task-Oriented Multi-User Semantic Communication
    Peng, Jincheng
    Xing, Huanlai
    Xu, Lexi
    Luo, Shouxi
    Dai, Penglin
    Feng, Li
    Song, Jing
    Zhao, Bowen
    Xiao, Zhiwen
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 14834 - 14851
  • [49] Machine Learning Occupancy Prediction Models - A Case Study
    Alfalah, Bashar
    Shahrestani, Mehdi
    Shao, Li
    ASHRAE TRANSACTIONS 2023, VOL 129, PT 1, 2023, 129 : 694 - 702
  • [50] Enhancing Transferability of Black-box Adversarial Attacks via Lifelong Learning for Speech Emotion Recognition Models
    Ren, Zhao
    Han, Jing
    Cummins, Nicholas
    Schuller, Bjoern W.
    INTERSPEECH 2020, 2020, : 496 - 500