Adversarial Robustness for Latent Models: Revisiting the Robust-Standard Accuracies Tradeoff

Citations: 0
Authors
Javanmard, Adel [1 ]
Mehrabi, Mohammad [1 ]
Affiliations
[1] Univ Southern Calif, Data Sci & Operat Dept, Los Angeles, CA 90089 USA
Funding
U.S. National Science Foundation;
Keywords
adversarial training; robust machine learning; low-dimensional structures; classification;
DOI
10.1287/opre.2022.0162
Chinese Library Classification (CLC)
C93 [Management Science];
Discipline codes
12 ; 1201 ; 1202 ; 120202 ;
Abstract
Over the past few years, several adversarial training methods have been proposed to improve the robustness of machine learning models against adversarial perturbations of the input. Despite remarkable progress in this regard, adversarial training is often observed to reduce standard test accuracy. This phenomenon has prompted the research community to investigate the potential tradeoff between standard accuracy (a.k.a. generalization) and robust accuracy (a.k.a. robust generalization) as two performance measures. In this paper, we revisit this tradeoff for latent models and argue that it is mitigated when the data enjoy a low-dimensional structure. In particular, we consider binary classification under two data-generative models, namely the Gaussian mixture model and the generalized linear model, where the feature vectors lie on a low-dimensional manifold. We develop a theory showing that the low-dimensional manifold structure allows one to obtain models that are nearly optimal with respect to both the standard accuracy and the robust accuracy measures. We further corroborate our theory with several numerical experiments, including a Mixture of Factor Analyzers (MFA) model trained on the MNIST data set.
Pages
1016-1030
Page count
15
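The setting described in the abstract can be sketched in a minimal, hypothetical form: feature vectors generated on a low-dimensional subspace from a two-component Gaussian mixture, and a linear classifier trained on the ℓ2-adversarial logistic loss, which for linear models admits the closed-form worst-case margin y⟨θ, x⟩ − ε‖θ‖. All dimensions, the perturbation budget `eps`, and the training loop below are illustrative assumptions, not the paper's actual construction or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n, eps = 50, 2, 500, 0.5  # ambient dim, latent dim, samples, l2 budget (assumed values)

# Features lie on a k-dimensional subspace of R^d: x = U s, with the latent s
# drawn from a two-component Gaussian mixture (a stand-in for the paper's
# Gaussian mixture model with low-dimensional structure).
U, _ = np.linalg.qr(rng.standard_normal((d, k)))          # orthonormal basis of the subspace
y = rng.choice([-1, 1], size=n)                           # balanced binary labels
s = y[:, None] * np.ones(k) + rng.standard_normal((n, k)) # latent means at +/-(1, ..., 1)
X = s @ U.T

# Adversarial logistic regression with l2-bounded perturbations: the worst-case
# input for a linear model is x - y * eps * theta / ||theta||, so the robust
# loss per sample is log(1 + exp(-(y <theta, x> - eps ||theta||))).
theta = np.zeros(d)
lr = 0.1
for _ in range(300):
    margins = y * (X @ theta) - eps * np.linalg.norm(theta)
    p = 1.0 / (1.0 + np.exp(np.clip(margins, -30.0, 30.0)))  # sigmoid(-margin), clipped for stability
    grad_data = -(p * y) @ X / n                             # gradient of the data-fit term
    norm = np.linalg.norm(theta)
    grad_pen = eps * np.mean(p) * (theta / norm if norm > 0 else np.zeros(d))
    theta -= lr * (grad_data + grad_pen)

std_acc = np.mean(np.sign(X @ theta) == y)                          # standard accuracy
rob_acc = np.mean(y * (X @ theta) - eps * np.linalg.norm(theta) > 0)  # robust accuracy
print(f"standard acc: {std_acc:.2f}, robust acc: {rob_acc:.2f}")
```

Because the data concentrate on a k-dimensional subspace, the learned θ can align with that subspace and pay the ε‖θ‖ robustness penalty without sacrificing much standard accuracy, which is the qualitative effect the paper analyzes.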