Adversarial attacks for machine learning denoisers and how to resist them

Cited by: 0
Authors
Jain, Saiyam B. [1 ,3 ]
Shao, Zongru [1 ]
Veettil, Sachin K. T. [2 ,3 ]
Hecht, Michael [1 ,2 ]
Affiliations
[1] Ctr Adv Syst Understanding CASUS, Gorlitz, Germany
[2] Ctr Syst Biol Dresden, Dresden, Germany
[3] Tech Univ Dresden, Fac Comp Sci, Dresden, Germany
Keywords
Noise reduction; machine learning denoiser; instability phenomenon; adversarial attacks; IMAGE; CNN
DOI
10.1117/12.2632954
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adversarial attacks exploit the instability phenomenon that arises in general for all inverse problems, e.g., image classification and reconstruction, independently of the computational scheme or method used to solve the problem. We prove mathematically and show empirically that machine learning denoisers (MLDs) are no exception: there exist adversarial attacks, given by noise patterns, that drive the MLD into instability, i.e., the MLD amplifies the noise instead of reducing it. We further demonstrate that neither adversarial retraining nor classic filtering provides an exit strategy from this dilemma. Instead, we show that adversarial attacks can be inferred by polynomial regression. Removing the inferred polynomial distribution from the total noise distribution yields an efficient technique for robust MLDs, making downstream computer vision tasks such as image segmentation or classification more reliable.
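The defence sketched in the abstract has two steps: infer the smooth, polynomial-shaped component of the noise by regression, then strip it before handing the image to the denoiser. The Python sketch below is one plausible reading of that idea, not the authors' published implementation; the function names (fit_polynomial_2d, robustify), the coordinate normalisation, and the two-pass noise estimate are all assumptions.

import numpy as np

def fit_polynomial_2d(img, degree=3):
    # Least-squares fit of a 2D polynomial surface p(x, y) to an image,
    # using all monomials x**i * y**j with i + j <= degree.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w          # normalise coordinates to [0, 1)
    y = yy.ravel() / h
    cols = [x**i * y**j for i in range(degree + 1)
                        for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)  # design matrix, shape (h*w, n_terms)
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)

def robustify(noisy_img, denoiser, degree=3):
    # Two-pass scheme (an assumption, not the paper's exact algorithm):
    # 1) estimate the noise from a first denoising pass,
    # 2) fit a polynomial to that estimate to capture a smooth
    #    adversarial component, 3) strip it and denoise the remainder.
    noise_estimate = noisy_img - denoiser(noisy_img)
    adversarial_part = fit_polynomial_2d(noise_estimate, degree)
    return denoiser(noisy_img - adversarial_part)

# Example with a trivial smoothing "denoiser" standing in for a CNN:
# from scipy.ndimage import gaussian_filter
# clean = robustify(noisy, lambda z: gaussian_filter(z, sigma=1.0))

Here denoiser is any callable mapping a 2D array to a denoised 2D array, e.g., a trained CNN wrapped as a function; the instability claim in the abstract means an attacker can search for a small perturbation that makes this callable increase, rather than decrease, the noise.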
Pages: 19