Adversarial attacks for machine learning denoisers and how to resist them

Cited by: 0
Authors
Jain, Saiyam B. [1 ,3 ]
Shao, Zongru [1 ]
Veettil, Sachin K. T. [2 ,3 ]
Hecht, Michael [1 ,2 ]
Affiliations
[1] Ctr Adv Syst Understanding CASUS, Gorlitz, Germany
[2] Ctr Syst Biol Dresden, Dresden, Germany
[3] Tech Univ Dresden, Fac Comp Sci, Dresden, Germany
Keywords
Noise reduction; machine learning denoiser; instability phenomenon; adversarial attacks; IMAGE; CNN
DOI
10.1117/12.2632954
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Adversarial attacks rely on the instability phenomenon that appears in general for all inverse problems, e.g., image classification and reconstruction, independently of the computational scheme or method used to solve the problem. We mathematically prove and empirically show that machine learning denoisers (MLDs) are not excluded: there exist adversarial attacks, given by noise patterns, that drive the MLD into instability, i.e., the MLD increases the noise instead of decreasing it. We further demonstrate that neither adversarial retraining nor classic filtering provides an exit strategy from this dilemma. Instead, we show that adversarial attacks can be inferred by polynomial regression. Removing the inferred polynomial component from the total noise yields an efficient technique that produces robust MLDs, which in turn make downstream computer vision tasks such as image segmentation or classification more reliable.
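The abstract compresses two technical steps: constructing a noise pattern that destabilizes a denoiser, and inferring that pattern by polynomial regression so it can be removed. The two Python sketches below illustrate one plausible reading of each step; they are our illustrations under stated assumptions, not the paper's implementation. The function names (`pgd_attack_denoiser`, `fit_polynomial_surface`, `remove_polynomial_component`), the PGD-style search, and the ordinary least-squares bivariate fit are all our assumptions.

```python
# Hypothetical sketch: a PGD-style search for a noise pattern that
# destabilizes a denoiser by maximizing, rather than minimizing, the
# reconstruction error. PyTorch is assumed; `denoiser` is any
# differentiable image-to-image module. Not the paper's exact attack.
import torch
import torch.nn.functional as F

def pgd_attack_denoiser(denoiser, clean, eps=0.05, step=0.01, iters=40):
    for p in denoiser.parameters():  # freeze the denoiser's weights
        p.requires_grad_(False)
    delta = torch.empty_like(clean).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        out = denoiser(clean + delta)
        # Instability objective: push the "denoised" output away from
        # the clean image, i.e., make the MLD amplify the noise.
        loss = F.mse_loss(out, clean)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()  # gradient *ascent*
            delta.clamp_(-eps, eps)            # stay inside the budget
        delta.grad.zero_()
    return delta.detach()
```

If `denoiser(clean + delta)` ends up farther from `clean` than `clean + delta` itself, the attack has triggered exactly the instability the abstract describes. For the defense, the abstract states that the adversarial component can be inferred by polynomial regression and removed from the total noise. A hedged NumPy sketch follows, assuming the adversarial pattern is smooth enough to be captured by a low-degree bivariate least-squares fit; the paper's actual regression scheme may differ.

```python
# Hypothetical sketch: infer a smooth (polynomial) component of an
# attacked image by bivariate least-squares regression, so it can be
# removed before the image is handed to the denoiser. The degree and
# the fit-to-the-whole-image choice are our assumptions.
import numpy as np
from numpy.polynomial import polynomial as P

def fit_polynomial_surface(img, deg=4):
    """Fit a degree-(deg, deg) polynomial surface to a 2-D image."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalize coordinates to [-1, 1] for a well-conditioned fit.
    xn = 2.0 * xx / (w - 1) - 1.0
    yn = 2.0 * yy / (h - 1) - 1.0
    V = P.polyvander2d(xn.ravel(), yn.ravel(), [deg, deg])
    coeffs, *_ = np.linalg.lstsq(V, img.ravel(), rcond=None)
    return (V @ coeffs).reshape(h, w)

def remove_polynomial_component(noisy, deg=4):
    """Split an attacked image into a smooth polynomial part and the
    remainder; the smooth part is the inferred adversarial component."""
    smooth = fit_polynomial_surface(noisy, deg)
    return noisy - smooth, smooth
```

Whether the smooth part is discarded outright or re-added after denoising is a pipeline choice the paper resolves; the sketch simply returns both pieces so either variant can be tested.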
Pages: 19