Asymptotic Behavior of Adversarial Training in Binary Linear Classification

Cited by: 1
Authors
Taheri, Hossein [1]
Pedarsani, Ramtin [1]
Thrampoulidis, Christos [1,2]
Affiliations
[1] Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106 USA
[2] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
Funding
National Science Foundation (NSF), USA
Keywords
Adversarial learning; adversarial training; high-dimensional statistics; optimization
DOI
10.1109/TNNLS.2023.3290592
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adversarial training using empirical risk minimization (ERM) is the state-of-the-art method for defense against adversarial attacks, that is, against small additive adversarial perturbations applied to test data that lead to misclassification. Despite being successful in practice, understanding the generalization properties of adversarial training in classification remains widely open. In this article, we take the first step in this direction by precisely characterizing the robustness of adversarial training in binary linear classification. Specifically, we consider the high-dimensional regime where the model dimension grows with the size of the training set at a constant ratio. Our results provide exact asymptotics for both standard and adversarial test errors under general ℓq-norm bounded perturbations (q ≥ 1) in both discriminative binary models and generative Gaussian-mixture models with correlated features. We use our sharp error formulae to explain how the adversarial and standard errors depend upon the over-parameterization ratio, the data model, and the attack budget. Finally, by comparing with the robust Bayes estimator, our sharp asymptotics allow us to study the fundamental limits of adversarial training.
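For a linear classifier, the adversarially robust ERM objective described in the abstract has a convenient closed form: under an ℓq-bounded attack of budget ε, the worst-case margin of a point (x, y) against weights w is y·wᵀx − ε‖w‖p, where p is the dual exponent (1/p + 1/q = 1). The sketch below (not the authors' code; the ℓ∞ attack choice, function names, and all hyperparameters are illustrative assumptions) trains a binary linear classifier on synthetic Gaussian-mixture data by plain gradient descent on this robust logistic loss.

```python
# Minimal sketch of adversarial training for a binary linear classifier under
# l_inf-bounded perturbations of budget eps (dual norm: l_1). Illustrative only.
# Closed form: min_{||d||_inf <= eps} y * w.(x + d) = y * w.x - eps * ||w||_1.
import numpy as np

def robust_logistic_loss(w, X, y, eps):
    """Average adversarial logistic loss; y in {-1, +1}, rows of X are samples."""
    margins = y * (X @ w) - eps * np.sum(np.abs(w))   # worst-case margins
    return np.mean(np.logaddexp(0.0, -margins))

def adversarial_train(X, y, eps, lr=0.1, steps=2000):
    """Gradient descent on the robust loss (untuned; for illustration only)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w) - eps * np.sum(np.abs(w))
        s = -1.0 / (1.0 + np.exp(np.clip(margins, -50, 50)))   # d loss / d margin
        grad = (X.T @ (s * y)) / n - eps * np.mean(s) * np.sign(w)
        w -= lr * grad
    return w

# Toy usage on a synthetic Gaussian-mixture model (isotropic noise, assumed setup).
rng = np.random.default_rng(0)
d, n = 50, 200
mu = rng.standard_normal(d) / np.sqrt(d)              # class mean direction
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.standard_normal((n, d))
w_robust = adversarial_train(X, y, eps=0.1)
print(robust_logistic_loss(w_robust, X, y, eps=0.1))
```

Setting eps = 0 recovers standard (non-robust) logistic regression, which makes the role of the attack budget in the trade-off between standard and adversarial error easy to probe empirically.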
Pages: 1-9 (9 pages)
Related Papers
50 records in total
  • [1] The geometry of adversarial training in binary classification
    Bungert, Leon
    Trillos, Nicolas Garcia
    Murray, Ryan
    INFORMATION AND INFERENCE-A JOURNAL OF THE IMA, 2023, 12 (02) : 921 - 968
  • [2] Meta-Adversarial Training of Neural Networks for Binary Classification
    Saadallah, Amal
    Morik, Katharina
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [4] Asymptotic behavior of normalized linear complexity of ultimately nonperiodic binary sequences
    Dai, ZD
    Jiang, SQ
    Imamura, K
    Gong, G
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2004, 50 (11) : 2911 - 2915
  • [5] On behavior classification in adversarial environments
    Riley, P
    Veloso, M
    DISTRIBUTED AUTONOMOUS ROBOTIC SYSTEMS, 2000, : 371 - 380
  • [6] An Adversarial Training Framework for Relation Classification
    Liu, Wenpeng
    Cao, Yanan
    Cao, Cong
    Liu, Yanbing
    Hu, Yue
    Guo, Li
    COMPUTATIONAL SCIENCE - ICCS 2018, PT II, 2018, 10861 : 194 - 205
  • [7] An adversarial training method for text classification
    Liu, Xiaoyang
    Dai, Shanghong
    Fiumara, Giacomo
    De Meo, Pasquale
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2023, 35 (08)
  • [8] Adversarial Training for Fake News Classification
    Tariq, Abdullah
    Mehmood, Abid
    Elhadef, Mourad
    Khan, Muhammad Usman Ghani
    IEEE ACCESS, 2022, 10 : 82706 - 82715
  • [9] Improvements to adversarial training for text classification
    He, Jia-Long
    Zhang, Xiao-Lin
    Wang, Yong-Ping
    Gu, Rui-Chun
    Liu, Li-Xin
    Xu, En-Hui
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2024, 46 (02) : 5191 - 5202
  • [10] The Adversarial Consistency of Surrogate Risks for Binary Classification
    Frank, Natalie S.
    Niles-Weed, Jonathan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,