Adversarial attacks for machine learning denoisers and how to resist them

Cited by: 0
Authors
Jain, Saiyam B. [1 ,3 ]
Shao, Zongru [1 ]
Veettil, Sachin K. T. [2 ,3 ]
Hecht, Michael [1 ,2 ]
Affiliations
[1] Ctr Adv Syst Understanding CASUS, Gorlitz, Germany
[2] Ctr Syst Biol Dresden, Dresden, Germany
[3] Tech Univ Dresden, Fac Comp Sci, Dresden, Germany
Keywords
Noise reduction; machine learning denoiser; instability phenomenon; adversarial attacks; IMAGE; CNN
DOI: 10.1117/12.2632954
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Adversarial attacks exploit the instability phenomenon that arises in general for all inverse problems, e.g., image classification and reconstruction, independently of the computational scheme or method used to solve the problem. We mathematically prove and empirically show that machine learning denoisers (MLD) are no exception. That is, we prove the existence of adversarial attacks, given by noise patterns, that drive the MLD into instability, i.e., the MLD amplifies the noise instead of reducing it. We further demonstrate that neither adversarial retraining nor classic filtering provides an exit from this dilemma. Instead, we show that adversarial attacks can be inferred by polynomial regression. Removing the inferred polynomial distribution from the total noise distribution yields an efficient technique for robust MLDs, making downstream computer vision tasks such as image segmentation or classification more reliable.
Pages: 19