Adversarial Robustness against Multiple and Single lp-Threat Models via Quick Fine-Tuning of Robust Classifiers

Cited by: 0
Authors
Croce, Francesco [1 ]
Hein, Matthias [1 ]
Affiliations
[1] University of Tübingen, Tübingen, Germany
Keywords: (none listed)
DOI: not available
CLC Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
A major drawback of adversarially robust models, in particular for large-scale datasets like ImageNet, is their extremely long training time compared to standard models. Moreover, models should ideally be robust not only to a single lp-threat model but to all of them. In this paper we propose Extreme norm Adversarial Training (E-AT) for multiple-norm robustness, which is based on geometric properties of lp-balls. E-AT costs up to three times less than other adversarial training methods for multiple-norm robustness. Using E-AT we show that a single epoch on ImageNet, and three epochs on CIFAR-10, suffice to turn any lp-robust model into a multiple-norm robust model. In this way we obtain the first multiple-norm robust model for ImageNet and boost the state of the art for multiple-norm robustness to more than 51% on CIFAR-10. Finally, we study the general transfer of adversarial robustness between different individual lp-threat models via fine-tuning, and improve the previous SOTA l1-robustness on both CIFAR-10 and ImageNet. Extensive experiments show that our scheme works across datasets and architectures, including vision transformers.
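The core idea behind E-AT, as the abstract describes it, is to train only against the two *extreme* norms (l1 and l-infinity), since the geometry of lp-balls lets robustness to intermediate norms follow. A minimal sketch of that per-batch norm sampling is below; the single-step perturbation functions and their names are illustrative assumptions, not the authors' actual attack (the paper uses multi-step adversarial training):

```python
import numpy as np

def linf_step(x, grad, eps):
    # Single l-infinity (FGSM-style) step: move eps in the sign direction
    # of the gradient; the perturbation has l-infinity norm exactly eps.
    return x + eps * np.sign(grad)

def l1_step(x, grad, eps):
    # Single l1 steepest-ascent step: spend the whole eps budget on the
    # coordinate with the largest |gradient|; l1 norm of the step is eps.
    delta = np.zeros_like(x)
    i = np.argmax(np.abs(grad))
    delta.flat[i] = eps * np.sign(grad.flat[i])
    return x + delta

def eat_perturb(x, grad, eps_l1, eps_linf, rng):
    # E-AT sketch: for each batch, sample one of the two extreme threat
    # models (l1 or l-infinity) and craft the adversarial example under
    # that norm only; intermediate lp robustness follows geometrically.
    if rng.random() < 0.5:
        return l1_step(x, grad, eps_l1)
    return linf_step(x, grad, eps_linf)
```

In a fine-tuning loop, `eat_perturb` would replace the single fixed-norm attack of standard adversarial training; everything else (loss, optimizer, schedule) stays as in ordinary robust training.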
Pages: 19
Related Papers (3)
  • [1] Zhu, Kaijie; Hu, Xixu; Wang, Jindong; Xie, Xing; Yang, Ge. Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 4401-4411.
  • [2] Wang, Song; Tan, Zhen; Guo, Ruocheng; Li, Jundong. Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance. Findings of the Association for Computational Linguistics (EMNLP 2023), 2023: 12528-12540.
  • [3] Qian, Fulan; Cui, Yan; Xu, Mengyao; Chen, Hai; Chen, Wenbin; Xu, Qian; Wu, Caihong; Yan, Yuanting; Zhao, Shu. IFM: Integrating and fine-tuning adversarial examples of recommendation system under multiple models to enhance their transferability. Knowledge-Based Systems, 2025, 311.