Analysis of classifiers’ robustness to adversarial perturbations

Cited: 0
Authors
Alhussein Fawzi
Omar Fawzi
Pascal Frossard
Institutions
[1] EPFL, Signal Processing Laboratory (LTS4)
[2] ENS de Lyon, LIP
Source
Machine Learning | 2018 / Volume 107
Keywords
Adversarial examples; Classification robustness; Random noise; Instability; Deep networks;
DOI
Not available
Abstract
The goal of this paper is to analyze the intriguing instability of classifiers to adversarial perturbations (Szegedy et al., in: International conference on learning representations (ICLR), 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on two practical classes of classifiers, namely the linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers, compared to the difficulty of the classification task (captured mathematically by the distinguishability measure). We further show the existence of a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor that is proportional to $\sqrt{d}$ (with d being the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high dimensional problems, which was empirically observed by Szegedy et al. in the context of neural networks.
We finally show experimental results on controlled and real-world data that confirm the theoretical analysis and extend its spirit to more complex classification schemes.
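The $\sqrt{d}$ gap stated in the abstract can be illustrated numerically for a linear classifier. The sketch below is not the paper's construction; it is a minimal illustration under assumed Gaussian data: for a linear classifier sign(w·x), the adversarial robustness of a point x is its distance to the hyperplane, |w·x| / ‖w‖, while the perturbation needed along a random unit direction v is |w·x| / |w·v|, which is larger by a factor on the order of √d.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                                  # signal dimension

# Hypothetical linear classifier sign(w @ x) and a sample point.
w = rng.standard_normal(d)
x = rng.standard_normal(d)

# Adversarial robustness: smallest perturbation flipping the sign,
# attained along w itself, with norm |w @ x| / ||w||.
adv_rob = abs(w @ x) / np.linalg.norm(w)

# Random-noise robustness: magnitude needed along a random unit
# direction v to cross the hyperplane, i.e. |w @ x| / |w @ v|.
vs = rng.standard_normal((200, d))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)
rand_rob = abs(w @ x) / np.abs(vs @ w)

ratio = np.median(rand_rob) / adv_rob
print(f"median random/adversarial robustness ratio: {ratio:.1f}")
print(f"sqrt(d) = {np.sqrt(d):.1f}")        # ratio is on the order of sqrt(d)
```

For a random unit vector v, the projection w·v concentrates around ‖w‖/√d, so the ratio of the two robustness measures scales as √d, matching the factor in the abstract.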
Pages: 481–508
Page count: 27