Analysis of classifiers’ robustness to adversarial perturbations

Cited by: 0
Authors
Alhussein Fawzi
Omar Fawzi
Pascal Frossard
Affiliations
[1] EPFL, Signal Processing Laboratory (LTS4)
[2] ENS de Lyon, LIP
Source
Machine Learning | 2018 / Volume 107
Keywords
Adversarial examples; Classification robustness; Random noise; Instability; Deep networks
DOI
Not available
Abstract
The goal of this paper is to analyze the intriguing instability of classifiers to adversarial perturbations (Szegedy et al., in: International conference on learning representations (ICLR), 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on two practical classes of classifiers, namely linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers compared to the difficulty of the classification task (captured mathematically by the distinguishability measure). We further show the existence of a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor proportional to $\sqrt{d}$ (with $d$ being the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high-dimensional problems, which was empirically observed by Szegedy et al. in the context of neural networks. We finally show experimental results on controlled and real-world data that confirm the theoretical analysis and extend its spirit to more complex classification schemes.
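To make the $\sqrt{d}$ claim concrete: for a linear classifier f(x) = sign(w^T x + b), the minimal adversarial perturbation at x has norm |w^T x + b| / ||w||_2 (the distance to the decision boundary), while a random unit direction v must be scaled to |w^T x + b| / |w^T v| before it crosses the boundary. The NumPy sketch below is illustrative only, not the authors' code; the unit-norm w, zero bias, and unit-margin point x are assumptions made for simplicity. It checks empirically that the random-direction quantity concentrates around a constant multiple of $\sqrt{d}$ times the adversarial one.

    import numpy as np

    rng = np.random.default_rng(0)
    for d in (10, 100, 1000):
        # Hypothetical linear classifier: unit-norm weights, zero bias (assumed setup).
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        x = w.copy()                    # a point at distance exactly 1 from the boundary w @ x = 0
        r_adv = abs(w @ x)              # minimal adversarial perturbation norm (= 1 here)

        # Smallest step along a random unit direction v that crosses the boundary:
        # w @ (x + t * v) = 0  =>  |t| = |w @ x| / |w @ v|
        v = rng.standard_normal((5000, d))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        r_rand = np.median(r_adv / np.abs(v @ w))

        print(f"d={d:4d}  r_adv={r_adv:.2f}  r_rand={r_rand:6.2f}  "
              f"ratio/sqrt(d)={r_rand / (r_adv * np.sqrt(d)):.2f}")

The last column stays roughly constant (about 1.5) as d grows from 10 to 1000, so the noise needed along a random direction exceeds the minimal adversarial perturbation by a factor proportional to $\sqrt{d}$, matching the abstract's statement for linear classifiers.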
Pages: 481 - 508
Page count: 27
Related papers
50 records in total
  • [1] On the robustness of randomized classifiers to adversarial examples
    Pinot, Rafael
    Meunier, Laurent
    Yger, Florian
    Gouy-Pailler, Cedric
    Chevaleyre, Yann
    Atif, Jamal
    MACHINE LEARNING, 2022, 111 (09) : 3425 - 3457
  • [2] Lower bounds on the robustness to adversarial perturbations
    Peck, Jonathan
    Roels, Joris
    Goossens, Bart
    Saeys, Yvan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [3] Exploiting Joint Robustness to Adversarial Perturbations
    Dabouei, Ali
    Soleymani, Sobhan
    Taherkhani, Fariborz
    Dawson, Jeremy
    Nasrabadi, Nasser M.
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 1119 - 1128
  • [4] On the Robustness of Randomized Ensembles to Adversarial Perturbations
    Dbouk, Hassan
    Shanbhag, Naresh R.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [5] Adversarial Training and Robustness for Multiple Perturbations
    Tramer, Florian
    Boneh, Dan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [6] Universal adversarial examples and perturbations for quantum classifiers
    Gong, Weiyuan
    Deng, Dong-Ling
    NATIONAL SCIENCE REVIEW, 2022, 9 (06)
  • [7] Robustness of classifiers: from adversarial to random noise
    Fawzi, Alhussein
    Moosavi-Dezfooli, Seyed-Mohsen
    Frossard, Pascal
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [8] Evaluating the adversarial robustness of Arabic spam classifiers
    Alajmi, Anwar
    Ahmad, Imtiaz
    Mohammed, Ameer
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (06) : 4323 - 4343