Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization

Cited by: 72
Authors
Lee, Saehyung
Lee, Hyungyu
Yoon, Sungroh [1 ]
Affiliation
[1] Seoul Natl Univ, Elect & Comp Engn, ASRI, INMC, Seoul 08826, South Korea
Funding
National Research Foundation, Singapore
DOI
10.1109/CVPR42600.2020.00035
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adversarial examples cause neural networks to produce incorrect outputs with high confidence. Although adversarial training is one of the most effective forms of defense against adversarial examples, unfortunately, a large gap exists between test accuracy and training accuracy in adversarial training. In this paper, we identify Adversarial Feature Overfitting (AFO), which may cause poor adversarially robust generalization, and we show that adversarial training can overshoot the optimal point in terms of robust generalization, leading to AFO in our simple Gaussian model. Considering these theoretical results, we present soft labeling as a solution to the AFO problem. Furthermore, we propose Adversarial Vertex mixup (AVmixup), a soft-labeled data augmentation approach for improving adversarially robust generalization. We complement our theoretical analysis with experiments on CIFAR10, CIFAR100, SVHN, and Tiny ImageNet, and show that AVmixup significantly improves the robust generalization performance and that it reduces the trade-off between standard accuracy and adversarial robustness.
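Below is a minimal PyTorch sketch of an AVmixup-style training loss, assuming image inputs scaled to [0, 1] and an L-infinity PGD attack for the inner maximization. The helper names (label_smooth, avmixup_loss) and the default hyperparameters (gamma, the two label-smoothing factors, the PGD settings) are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def label_smooth(y, num_classes, lam):
    """Soft labels: keep `lam` of the probability mass on the true class,
    spread the remainder uniformly over all classes."""
    one_hot = F.one_hot(y, num_classes).float()
    return lam * one_hot + (1.0 - lam) / num_classes


def avmixup_loss(model, x, y, num_classes,
                 eps=8 / 255, step_size=2 / 255, pgd_steps=10,
                 gamma=2.0, lam_clean=1.0, lam_adv=0.1):
    """AVmixup-style training loss for one batch (illustrative sketch).

    1. Find an adversarial perturbation delta with a few PGD steps.
    2. Scale it by gamma to obtain the adversarial vertex x + gamma * delta.
    3. Draw alpha ~ U(0, 1) per example and interpolate both the inputs and
       their soft labels between the clean point and the adversarial vertex.
    """
    # PGD inner maximization (standard L_inf PGD) to obtain delta.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(pgd_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    delta = delta.detach()

    # Adversarial vertex: extrapolate beyond the adversarial example.
    x_av = torch.clamp(x + gamma * delta, 0.0, 1.0)

    # Per-example interpolation coefficient (assumes NCHW image inputs).
    alpha = torch.rand(x.size(0), 1, 1, 1, device=x.device)
    x_mix = alpha * x + (1.0 - alpha) * x_av

    # Soft labels for the clean endpoint and the adversarial vertex, then mix.
    y_clean = label_smooth(y, num_classes, lam_clean)
    y_av = label_smooth(y, num_classes, lam_adv)
    a = alpha.view(-1, 1)
    y_mix = a * y_clean + (1.0 - a) * y_av

    # Cross-entropy against the soft targets on the interpolated examples.
    log_probs = F.log_softmax(model(x_mix), dim=1)
    return -(y_mix * log_probs).sum(dim=1).mean()
```

In a training loop, the returned value would simply replace the usual adversarial-training loss, e.g. loss = avmixup_loss(model, images, labels, num_classes=10) for CIFAR10.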
Pages: 269 - 278
Number of pages: 10
Related Papers (50 records)
  • [1] Regional Adversarial Training for Better Robust Generalization
    Song, Chuanbiao
    Fan, Yanbo
    Zhou, Aoyang
    Wu, Baoyuan
    Li, Yiming
    Li, Zhifeng
    He, Kun
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (10) : 4510 - 4520
  • [2] Better Robustness by More Coverage: Adversarial and Mixup Data Augmentation for Robust Finetuning
    Si, Chenglei
    Zhang, Zhengyan
    Qi, Fanchao
    Liu, Zhiyuan
    Wang, Yasheng
    Liu, Qun
    Sun, Maosong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 1569 - 1576
  • [3] Rademacher Complexity for Adversarially Robust Generalization
    Yin, Dong
    Ramchandran, Kannan
    Bartlett, Peter
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [4] Improved Generalization Bounds for Adversarially Robust Learning
    Attias, Idan
    Kontorovich, Aryeh
    Mansour, Yishay
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [5] Adversarially Robust Generalization Requires More Data
    Schmidt, Ludwig
    Santurkar, Shibani
    Tsipras, Dimitris
    Talwar, Kunal
    Madry, Aleksander
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [6] Relating Adversarially Robust Generalization to Flat Minima
    Stutz, David
    Hein, Matthias
    Schiele, Bernt
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 7787 - 7797
  • [7] Pruning Adversarially Robust Neural Networks without Adversarial Examples
    Jian, Tong
    Wang, Zifeng
    Wang, Yanzhi
    Dy, Jennifer
    Ioannidis, Stratis
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2022, : 993 - 998
  • [8] Exploring the Relationship between Architectural Design and Adversarially Robust Generalization
    Liu, Aishan
    Tang, Shiyu
    Liang, Siyuan
    Gong, Ruihao
    Wu, Boxi
    Liu, Xianglong
    Tao, Dacheng
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 4096 - 4107
  • [9] Adversarially robust generalization from network crowd noisy labels
    Ma, Chicheng
    Chen, Pengpeng
    Li, Wenfa
    Zhang, Xueyang
    Zhang, Yirui
    ALEXANDRIA ENGINEERING JOURNAL, 2025, 114 : 711 - 718