Sharpness-Aware Minimization Leads to Better Robustness in Meta-learning

Times Cited: 0
Authors
Xu, Mengke [1]
Wang, Huiwei [2,3]
Affiliations
[1] Southwest Univ, Coll Elect & Informat Engn, Chongqing 400715, Peoples R China
[2] Chongqing Three Gorges Univ, Key Lab Intelligent Informat Proc, Chongqing 404100, Peoples R China
[3] Beijing Inst Technol, Chongqing Innovat Ctr, Chongqing 401120, Peoples R China
Funding
China Postdoctoral Science Foundation
Keywords
Meta-learning; R2D2; Sharpness-Aware Minimization
DOI
10.1109/ICACI58115.2023.10146130
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Transforming few-shot learning into meta-learning is an important way to narrow the gap between human ability and machine learning. In this paper, we study the adversarial robustness of meta-learning models and propose the Defending R2D2 (DeR2D2) algorithm to resist attacks. We focus on two problems in adversarial meta-learning: the high training cost and the significant drop in classification accuracy on clean samples. First, we demonstrate that introducing adversarial samples into R2D2 training improves its adversarial robustness. Second, we adopt the Randomized Fast Gradient Sign Method (R+FGSM) instead of Projected Gradient Descent (PGD) as the adversarial training method, which significantly reduces the training cost. Finally, by applying Sharpness-Aware Minimization (SAM), our method further reduces adversarial training time and significantly improves classification accuracy on clean samples. In addition, we verify that in most cases DeR2D2 also defends strongly against attacks.
Pages: 8
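Illustrative sketch: the abstract combines two standard building blocks, R+FGSM (a small random perturbation followed by a single FGSM gradient step) for cheap adversarial example generation, and Sharpness-Aware Minimization (a two-pass update that uses the gradient taken at a worst-case nearby point in weight space). The PyTorch code below is a minimal, generic sketch of those two components only; it is not the paper's DeR2D2 implementation, and the toy model, loss function, and hyperparameters (eps, alpha, rho) are placeholder assumptions.

```python
# Minimal sketch (not the authors' code): R+FGSM adversarial examples plus
# one SAM-style weight update, assuming a generic PyTorch classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


def r_fgsm(model, x, y, eps=8 / 255, alpha=4 / 255):
    """Randomized FGSM: a small random sign step, then one FGSM gradient step."""
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0, 1)
    x_rand = x_rand.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    grad = torch.autograd.grad(loss, x_rand)[0]
    x_adv = x_rand + (eps - alpha) * grad.sign()
    return x_adv.clamp(0, 1).detach()


def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One SAM update: climb to the worst-case nearby weights, then descend."""
    # Pass 1: gradient at the current weights w.
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2)
    # Perturb weights to w + rho * g / ||g||, remembering each perturbation.
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append((p, e))
    # Pass 2: gradient at the perturbed weights drives the actual update.
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)  # restore the original weights before stepping
    base_opt.step()


if __name__ == "__main__":
    # Toy 5-way classifier standing in for the meta-learner's backbone and head.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 5))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 5, (8,))
    x_adv = r_fgsm(model, x, y)                       # cheap adversarial examples
    sam_step(model, F.cross_entropy, x_adv, y, opt)   # sharpness-aware update on them
```

Per the abstract, the paper applies these pieces inside R2D2's meta-training; here a plain linear classifier stands in so the sketch stays self-contained.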