Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation

Cited by: 10
Authors
Huang, Bo [1 ,2 ]
Chen, Mingyang [1 ,2 ]
Wang, Yi [3 ]
Lu, Junda [4 ]
Cheng, Minhao [2 ]
Wang, Wei [1 ,2 ]
Affiliations
[1] Hong Kong Univ Sci & Technol Guangzhou, Guangzhou, Peoples R China
[2] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[3] Dongguan Univ Technol, Dongguan, Peoples R China
[4] Macquarie Univ, Sydney, NSW, Australia
Keywords
DOI
10.1109/CVPR52729.2023.02363
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Distilled student models in teacher-student architectures are widely adopted for computation-efficient deployment in real-time applications and on edge devices. However, student models deployed at the edge face a higher risk of encountering adversarial attacks. Popular robustness-enhancing schemes such as adversarial training have limited performance on compressed networks. Recent studies have therefore turned to adversarial distillation (AD), which aims to inherit not only the prediction accuracy but also the adversarial robustness of a robust teacher model under the paradigm of robust optimization. In the min-max framework of AD, existing methods generally use fixed supervision from the teacher model to guide the inner optimization for knowledge distillation, which often leads to an overcorrection towards model smoothness. In this paper, we propose adaptive adversarial distillation (AdaAD), which involves the teacher model in the knowledge optimization process, interacting with the student model to adaptively search for the inner-optimization results. Compared with state-of-the-art methods, the proposed AdaAD significantly boosts both the prediction accuracy and the adversarial robustness of student models in most scenarios. In particular, a ResNet-18 model trained with AdaAD achieves top-rank performance (54.23% robust accuracy) on RobustBench under AutoAttack.
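The abstract's key idea is that AdaAD replaces the fixed-supervision inner problem of prior AD methods with an adaptive search: the inner maximization looks for the perturbed input on which the student and teacher predictions diverge most. As a minimal illustrative sketch (not the paper's implementation), the inner problem can be written as a PGD-style ascent on the student-teacher KL divergence; the linear "models", step sizes, and the numerical gradient below are all simplifying assumptions for demonstration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL(p || q), with a small epsilon for numerical stability
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def adaptive_inner_max(x, w_student, w_teacher, eps=0.3, step=0.05, iters=20):
    """Search the L_inf eps-ball around x for the point x' that maximizes
    KL(student(x') || teacher(x')) -- i.e. where the two models disagree
    most. This mimics AdaAD's adaptive inner optimization, in contrast to
    attacking against a fixed label or fixed teacher output."""
    x_adv = x.copy()
    h = 1e-4
    for _ in range(iters):
        # Numerical (finite-difference) gradient of the KL objective
        # w.r.t. the input; a real implementation would backpropagate.
        base = kl(softmax(w_student @ x_adv), softmax(w_teacher @ x_adv))
        grad = np.zeros_like(x_adv)
        for i in range(x_adv.size):
            xp = x_adv.copy()
            xp[i] += h
            grad[i] = (kl(softmax(w_student @ xp),
                          softmax(w_teacher @ xp)) - base) / h
        # PGD-style sign-gradient ascent step, then project back to the ball
        x_adv = x_adv + step * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The outer (distillation) step would then train the student to match the teacher's prediction at the found point `x_adv`, so the supervision adapts to wherever the student currently disagrees with the teacher rather than staying fixed.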
Pages: 24668-24677
Page count: 10
Related Papers
50 records in total
  • [1] Boosting accuracy of student models via Masked Adaptive Self-Distillation
    Zhao, Haoran
    Tian, Shuwen
    Wang, Jinlong
    Deng, Zhaopeng
    Sun, Xin
    Dong, Junyu
    NEUROCOMPUTING, 2025, 637
  • [2] Enhanced Accuracy and Robustness via Multi-teacher Adversarial Distillation
    Zhao, Shiji
    Yu, Jie
    Sun, Zhenlong
    Zhang, Bo
    Wei, Xingxing
    COMPUTER VISION - ECCV 2022, PT IV, 2022, 13664 : 585 - 602
  • [3] Boosting adversarial robustness via self-paced adversarial training
    He, Lirong
    Ai, Qingzhong
    Yang, Xincheng
    Ren, Yazhou
    Wang, Qifan
    Xu, Zenglin
    NEURAL NETWORKS, 2023, 167 : 706 - 714
  • [4] Improving Adversarial Robustness via Information Bottleneck Distillation
    Kuang, Huafeng
    Liu, Hong
    Wu, YongJian
    Satoh, Shin'ichi
    Ji, Rongrong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [6] Boosting adversarial robustness via feature refinement, suppression, and alignment
    Wu, Yulun
    Guo, Yanming
    Chen, Dongmei
    Yu, Tianyuan
    Xiao, Huaxin
    Guo, Yuanhao
    Bai, Liang
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (03) : 3213 - 3233
  • [7] Improving Adversarial Robustness via Distillation-Based Purification
    Koo, Inhwa
    Chae, Dong-Kyu
    Lee, Sang-Chul
    Cascio, Donato
    APPLIED SCIENCES-BASEL, 2023, 13 (20):
  • [8] Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better
    Zi, Bojia
    Zhao, Shihao
    Ma, Xingjun
    Jiang, Yu-Gang
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 16423 - 16432
  • [9] Mitigating Accuracy-Robustness Trade-Off via Balanced Multi-Teacher Adversarial Distillation
    Zhao, Shiji
    Wang, Xizhe
    Wei, Xingxing
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (12) : 9338 - 9352
  • [10] BOOSTING NOISE ROBUSTNESS OF ACOUSTIC MODEL VIA DEEP ADVERSARIAL TRAINING
    Liu, Bin
    Nie, Shuai
    Zhang, Yaping
    Ke, Dengfeng
    Liang, Shan
    Liu, Wenju
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 5034 - 5038