Sharpness-Aware Minimization Leads to Better Robustness in Meta-learning

Cited: 0
Authors
Xu, Mengke [1 ]
Wang, Huiwei [2 ,3 ]
Affiliations
[1] Southwest Univ, Coll Elect & Informat Engn, Chongqing 400715, Peoples R China
[2] Chongqing Three Gorges Univ, Key Lab Intelligent Informat Proc, Chongqing 404100, Peoples R China
[3] Beijing Inst Technol, Chongqing Innovat Ctr, Chongqing 401120, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Meta-learning; R2D2; Sharpness-Aware Minimization;
DOI
10.1109/ICACI58115.2023.10146130
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transforming few-shot learning into a meta-learning problem is an important way to narrow the gap between human ability and machine learning. In this paper, we study the adversarial robustness of meta-learning models and propose the Defending R2D2 (DeR2D2) algorithm to resist attacks. We focus on two problems in adversarial meta-learning: the high training cost and the significant drop in classification accuracy on clean samples. First, we show that introducing adversarial samples into R2D2 training improves its adversarial robustness. Second, we adopt the Randomized Fast Gradient Sign Method (R+FGSM) instead of Projected Gradient Descent (PGD) for adversarial training, which significantly reduces the training cost. Finally, by incorporating Sharpness-Aware Minimization (SAM), our method further reduces adversarial training time and significantly improves classification accuracy on clean samples. In addition, we verify that in most cases DeR2D2 retains a strong ability to defend against attacks.
Pages: 8
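The abstract above combines two standard ingredients: adversarial examples generated with R+FGSM (a random start followed by a single FGSM step, much cheaper than multi-step PGD) and a SAM-style weight update (first ascend to a nearby high-loss point in parameter space, then descend using the gradient taken there). The PyTorch sketch below illustrates those two pieces only; it is not the authors' DeR2D2 implementation, and the model, loss, optimizer, and the hyper-parameters eps, alpha, and rho are illustrative assumptions.

```python
# Minimal sketch of R+FGSM adversarial example generation and a SAM-style update.
# Illustrative only; not the DeR2D2 code described in the paper.
import torch
import torch.nn.functional as F


def r_fgsm(model, x, y, eps=8 / 255, alpha=4 / 255):
    """One common R+FGSM variant: random start inside the eps-ball, then a single FGSM step."""
    delta = torch.empty_like(x).uniform_(-eps, eps)          # random initial perturbation
    x_adv = (x + delta).clamp(0, 1).requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv.detach() + alpha * grad.sign()              # single signed-gradient step
    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)     # project back into the eps-ball
    return x_adv.clamp(0, 1).detach()


def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One SAM update: perturb weights toward higher loss, then descend with that gradient."""
    # First pass: gradient at the current weights w.
    loss_fn(model(x), y).backward()
    grad_norm = torch.norm(
        torch.stack([p.grad.norm(2) for p in model.parameters() if p.grad is not None]), 2
    )
    # Ascend to the nearby "sharp" point w + e(w), e(w) = rho * grad / ||grad||.
    perturbed = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e_w = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e_w)
            perturbed.append((p, e_w))
    model.zero_grad()
    # Second pass: gradient at the perturbed weights.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e_w in perturbed:
            p.sub_(e_w)          # restore the original weights
    base_opt.step()              # descend using the sharpness-aware gradient
    base_opt.zero_grad()
```

In an R2D2-style episode one would typically replace part of each training batch with r_fgsm outputs and perform the meta-update through sam_step; how DeR2D2 actually combines the two is specified in the paper itself, not in this record.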
Related Papers
50 records in total
  • [1] Federated Model-Agnostic Meta-Learning With Sharpness-Aware Minimization for Internet of Things Optimization
    Wu, Qingtao
    Zhang, Yong
    Liu, Muhua
    Zhu, Junlong
    Zheng, Ruijuan
    Zhang, Mingchuan
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (19): 31317 - 31330
  • [2] Random Sharpness-Aware Minimization
    Liu, Yong
    Mai, Siqi
    Cheng, Minhao
    Chen, Xiangning
    Hsieh, Cho-Jui
    You, Yang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [3] Sharpness-Aware Minimization Leads to Low-Rank Features
    Andriushchenko, Maksym
    Bahri, Dara
    Mobahi, Hossein
    Flammarion, Nicolas
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [4] Friendly Sharpness-Aware Minimization
    Li, Tao
    Zhou, Pan
    He, Zhengbao
    Cheng, Xinwen
    Huang, Xiaolin
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024: 5631 - 5640
  • [5] FedGAMMA: Federated Learning With Global Sharpness-Aware Minimization
    Dai, Rong
    Yang, Xun
    Sun, Yan
    Shen, Li
    Tian, Xinmei
    Wang, Meng
    Zhang, Yongdong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12): 17479 - 17492
  • [6] Enhancing Sharpness-Aware Minimization by Learning Perturbation Radius
    Wang, Xuehao
    Jiang, Weisen
    Fu, Shuai
    Zhang, Yu
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, PT II, ECML PKDD 2024, 2024, 14942 : 375 - 391
  • [7] Convergence of Sharpness-Aware Minimization with Momentum
    Pham Duy Khanh
    Luong, Hoang-Chau
    Mordukhovich, Boris S.
    Dat Ba Tran
    Truc Vo
    INFORMATION TECHNOLOGIES AND THEIR APPLICATIONS, PT II, ITTA 2024, 2025, 2226 : 123 - 132
  • [8] Towards Understanding Sharpness-Aware Minimization
    Andriushchenko, Maksym
    Flammarion, Nicolas
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022, : 639 - 668
  • [9] Sharpness-Aware Minimization and the Edge of Stability
    Long, Philip M.
    Bartlett, Peter L.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25 : 1 - 20
  • [10] Why Does Sharpness-Aware Minimization Generalize Better Than SGD?
    Chen, Zixiang
    Zhang, Junkai
    Kou, Yiwen
    Chen, Xiangning
    Hsieh, Cho-Jui
    Gu, Quanquan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,