Deceiving supervised machine learning models via adversarial data poisoning attacks: a case study with USB keyboards

Cited by: 3
Authors
Chillara, Anil Kumar [1 ]
Saxena, Paresh [1 ]
Maiti, Rajib Ranjan [1 ]
Gupta, Manik [1 ]
Kondapalli, Raghu [2 ]
Zhang, Zhichao [2 ]
Kesavan, Krishnakumar [2 ]
Affiliations
[1] BITS Pilani, CSIS Dept, Hyderabad 500078, Telangana, India
[2] Axiado Corp, 2610 Orchard Pkwy,3rd Fl, San Jose, CA 95134 USA
Keywords
USB; Adversarial learning; Data poisoning attacks; Keystroke injection attacks; Supervised learning;
DOI
10.1007/s10207-024-00834-y
CLC number
TP [Automation technology; computer technology];
Subject classification
0812;
Abstract
Due to its plug-and-play functionality and wide device support, the universal serial bus (USB) protocol has become one of the most widely used protocols. However, this widespread adoption has introduced a significant security concern: the implicit trust granted to USB devices creates a vast array of attack vectors. Malicious USB devices exploit this trust by disguising themselves as benign peripherals and covertly injecting malicious commands into connected host devices. Existing research employs supervised learning models to identify such malicious devices, but our study reveals a weakness in these models when they face sophisticated data poisoning attacks. We propose, design, and implement a sophisticated adversarial data poisoning attack to demonstrate how these models can be manipulated into misclassifying an attack device as benign. Our method generates keystroke data using a microprogrammable keystroke attack device. We develop an adversarial attacker by meticulously analyzing the distribution of features extracted from keystroke data generated by benign users on USB keyboards. The initial training data is then modified by exploiting firmware-level modifications within the attack device. Upon evaluating the models, our findings reveal a significant drop in detection accuracy, from 99% to 53%, when the adversarial attacker is employed. This work highlights the critical need to reevaluate the dependability of machine learning-based USB threat detection mechanisms in the face of increasingly sophisticated attack methods. The demonstrated vulnerabilities underscore the importance of developing more robust and resilient detection strategies to protect against evolving malicious USB devices.
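The core idea of the attack described in the abstract can be sketched in miniature. The toy below is an illustrative assumption, not the paper's actual pipeline: the feature (mean inter-keystroke delay), the Gaussian timing distributions, the threshold detector, and all function names are hypothetical stand-ins for the paper's supervised models and firmware-level modifications. It shows why a detector trained on benign typing timing flags a naive keystroke injector but misses an attacker whose firmware replays delays drawn from the estimated benign distribution.

```python
import random
import statistics

random.seed(7)  # deterministic for reproducibility

def benign_session(n=200):
    # Hypothetical human typing: inter-keystroke delays (ms), roughly Gaussian
    return [max(10.0, random.gauss(120.0, 35.0)) for _ in range(n)]

def naive_attack_session(n=200):
    # Off-the-shelf keystroke injectors type implausibly fast
    return [random.uniform(2.0, 8.0) for _ in range(n)]

def adversarial_attack_session(observed_benign, n=200):
    # Adversarial attacker: estimate the benign delay distribution from
    # observed data and program the device firmware to replay delays
    # sampled from that estimate (mimicking the poisoning strategy)
    mu = statistics.mean(observed_benign)
    sigma = statistics.stdev(observed_benign)
    return [max(10.0, random.gauss(mu, sigma)) for _ in range(n)]

def train_threshold(benign_sessions):
    # Toy detector: flag a session whose mean delay falls far below
    # the benign mean (a crude stand-in for a supervised classifier)
    means = [statistics.mean(s) for s in benign_sessions]
    return statistics.mean(means) - 4 * statistics.stdev(means)

def classify(session, threshold):
    return "attack" if statistics.mean(session) < threshold else "benign"

threshold = train_threshold([benign_session() for _ in range(50)])
observed = benign_session()  # data the attacker managed to observe

print(classify(naive_attack_session(), threshold))
print(classify(adversarial_attack_session(observed), threshold))
```

The naive injector's ~5 ms delays sit far below the learned threshold and are flagged, while the adversarial device's delays are statistically indistinguishable from benign typing and pass as benign, which is the misclassification effect the paper quantifies.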
Pages: 2043-2061
Page count: 19