Defense strategies for Adversarial Machine Learning: A survey

Cited by: 16
|
Authors
Bountakas, Panagiotis [1 ]
Zarras, Apostolis [1 ]
Lekidis, Alexios [1 ]
Xenakis, Christos [1 ]
Affiliations
[1] Univ Piraeus, Dept Digital Syst, 80 Karaoli & Dimitriou, Piraeus 18534, Attica, Greece
Funding
European Union Horizon 2020;
Keywords
Survey; Machine Learning; Adversarial Machine Learning; Defense methods; Computer vision; Cybersecurity; Natural Language Processing; Audio; DETECTION SYSTEMS; ATTACKS; INTRUSION; ROBUST; CLASSIFICATION; SECURITY;
DOI
10.1016/j.cosrev.2023.100573
Chinese Library Classification (CLC)
TP [Automation and computer technology];
Discipline code
0812;
Abstract
Adversarial Machine Learning (AML) is a recently introduced technique that aims to deceive Machine Learning (ML) models by providing falsified inputs that render those models ineffective. Consequently, most researchers focus on detecting new AML attacks that can undermine existing ML infrastructures, while overlooking the significance of defense strategies. This article constitutes a survey of the existing literature on AML attacks and defenses, with a special focus on a taxonomy of recent works on AML defense techniques for different application domains, such as audio, cyber-security, NLP, and computer vision. The proposed survey also explores the methodology of the defense solutions and compares them using several criteria, such as whether they are attack- and/or domain-agnostic, whether they deploy appropriate AML evaluation metrics, and whether they share their source code and/or their evaluation datasets. To the best of our knowledge, this article constitutes the first survey that seeks to systematize the existing knowledge focusing solely on defense solutions against AML and to provide innovative directions for future research on tackling the increasing threat of AML. (C) 2023 Elsevier Inc. All rights reserved.
Pages: 20
Related papers
50 records in total
  • [21] A Network Security Classifier Defense: Against Adversarial Machine Learning Attacks
    De Lucia, Michael J.
    Cotton, Chase
    PROCEEDINGS OF THE 2ND ACM WORKSHOP ON WIRELESS SECURITY AND MACHINE LEARNING, WISEML 2020, 2020, : 67 - 73
  • [22] A comprehensive survey on regularization strategies in machine learning
    Tian, Yingjie
    Zhang, Yuqi
    INFORMATION FUSION, 2022, 80 : 146 - 166
  • [23] HyperAdv: Dynamic Defense Against Adversarial Radio Frequency Machine Learning Systems
    Zhang, Milin
    De Lucia, Michael
    Swami, Ananthram
    Ashdown, Jonathan
    Turck, Kurt
    Restuccia, Francesco
    MILCOM 2024-2024 IEEE MILITARY COMMUNICATIONS CONFERENCE, MILCOM, 2024, : 821 - 826
  • [24] Using Undervolting as an on-Device Defense Against Adversarial Machine Learning Attacks
    Majumdar, Saikat
    Samavatian, Mohammad Hossein
    Barber, Kristin
    Teodorescu, Radu
    2021 IEEE INTERNATIONAL SYMPOSIUM ON HARDWARE ORIENTED SECURITY AND TRUST (HOST), 2021, : 158 - 169
  • [25] Adversarial Deep Learning for Cognitive Radio Security: Jamming Attack and Defense Strategies
    Shi, Yi
    Sagduyu, Yalin E.
    Erpek, Tugba
    Davaslioglu, Kemal
    Lu, Zhuo
    Li, Jason H.
    2018 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2018,
  • [26] Defense Strategies Against Adversarial Jamming Attacks via Deep Reinforcement Learning
    Wang, Feng
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2020 54TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2020, : 336 - 341
  • [27] Adversarial Machine Learning
    Tygar, J. D.
    IEEE INTERNET COMPUTING, 2011, 15 (05) : 4 - 6
  • [28] Adversarial Machine Learning for Network Intrusion Detection Systems: A Comprehensive Survey
    He, Ke
    Kim, Dan Dongseong
    Asghar, Muhammad Rizwan
    IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2023, 25 (01): : 538 - 566
  • [29] A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks
    Dasgupta, Prithviraj
    Collins, Joseph B.
    AI MAGAZINE, 2019, 40 (02) : 31 - 43
  • [30] AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 27TH USENIX SECURITY SYMPOSIUM, 2018, : 513 - 529