Adversarial attacks for machine learning denoisers and how to resist them

Times cited: 0
Authors
Jain, Saiyam B. [1,3]
Shao, Zongru [1]
Veettil, Sachin K. T. [2,3]
Hecht, Michael [1,2]
Affiliations
[1] Center for Advanced Systems Understanding (CASUS), Görlitz, Germany
[2] Center for Systems Biology Dresden, Dresden, Germany
[3] Technische Universität Dresden, Faculty of Computer Science, Dresden, Germany
Keywords
Noise reduction; machine learning denoiser; instability phenomenon; adversarial attacks; image; CNN
DOI
10.1117/12.2632954
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Adversarial attacks exploit the instability phenomenon that arises, in general, for all inverse problems, e.g., image classification and reconstruction, independently of the computational scheme or method used to solve the problem. We mathematically prove and empirically show that machine learning denoisers (MLDs) are no exception. That is, we prove the existence of adversarial attacks: noise patterns that drive an MLD into instability, so that it amplifies the noise instead of reducing it. We further demonstrate that neither adversarial retraining nor classic filtering provides an exit strategy from this dilemma. Instead, we show that adversarial attacks can be inferred by polynomial regression. Removing the underlying inferred polynomial distribution from the total noise distribution yields an efficient technique for robust MLDs, making downstream computer vision tasks such as image segmentation or classification more reliable.
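A minimal 1-D sketch (not the authors' code) of the two ideas above: a structured noise pattern that a denoiser amplifies rather than removes, and a polynomial-regression defence that strips the inferred pattern before denoising. The moving-average filter stands in for a learned denoiser, and the signal, polynomial degree, noise amplitude, and search budget are all illustrative assumptions:

import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)

def denoise(y, k=5):
    # Stand-in "denoiser": a width-k moving average; in the paper's
    # setting a trained CNN denoiser takes this place.
    return np.convolve(y, np.ones(k) / k, mode="same")

t = np.linspace(0.0, 1.0, 256)
x = np.cos(2 * np.pi * 8 * t)  # clean signal, far from any low-degree polynomial

# Baseline: the denoiser reduces ordinary white noise.
y = x + 0.3 * rng.standard_normal(t.size)
print("white noise, denoised :", np.linalg.norm(denoise(y) - x))

# (1) Attack: search random degree-5 polynomial noise patterns, at the same
# RMS amplitude as the white noise, for the one the denoiser handles worst.
best_delta, best_err = None, -np.inf
for _ in range(200):
    delta = P.polyval(t, rng.standard_normal(6))
    delta *= 0.3 / np.sqrt(np.mean(delta**2))  # normalise to RMS 0.3
    err = np.linalg.norm(denoise(x + delta) - x)
    if err > best_err:
        best_delta, best_err = delta, err
print("adversarial, denoised :", best_err)  # instability: the error grows

# (2) Defence: infer the structured component of the corrupted input by
# polynomial regression and remove it before denoising.
y_adv = x + best_delta
p_hat = P.polyval(t, P.polyfit(t, y_adv, 5))  # inferred polynomial part
print("defended, denoised    :", np.linalg.norm(denoise(y_adv - p_hat) - x))

The sketch only mirrors the structure of the argument: worst-case low-degree noise passes through the smoothing denoiser almost untouched, while regressing it out first restores roughly the white-noise error level. The paper's actual results concern images and trained networks, not this toy filter.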
Pages: 19