Few-shot Learning with Online Self-Distillation

Cited by: 9
Authors
Liu, Sihan [1 ]
Wang, Yue [2 ]
Affiliations
[1] Boston Univ, Boston, MA 02215 USA
[2] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Keywords
DOI
10.1109/ICCVW54120.2021.00124
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Few-shot learning has been a long-standing problem in learning to learn. It typically involves training a model on an extremely small amount of data and testing it on out-of-distribution data. Recent few-shot learning research has focused on developing good representation models that can quickly adapt to test tasks. To that end, we propose a model that learns representations through online self-distillation. Our model combines supervised training with knowledge distillation via a continuously updated teacher. We also identify that data augmentation plays an important role in producing robust features. Our final model is trained with CutMix augmentation and online self-distillation. On the commonly used benchmark miniImageNet, our model achieves 67.07% and 83.03% under the 5-way 1-shot setting and the 5-way 5-shot setting, respectively. It outperforms counterparts of its kind by 2.25% and 0.89%.
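A minimal PyTorch-style sketch of the training step the abstract describes (a CutMix-augmented supervised loss plus KL distillation from a continuously updated teacher), assuming the teacher is an exponential-moving-average (EMA) copy of the student. The function names and the temperature, distillation weight, and EMA momentum values below are illustrative assumptions, not the paper's reported configuration.

import numpy as np
import torch
import torch.nn.functional as F

def cutmix(x, y, alpha=1.0):
    # Paste a random rectangle from a shuffled copy of the batch into each image;
    # return mixed images, both label sets, and the mixing ratio lambda.
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(x.size(0))
    _, _, h, w = x.shape
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    x = x.clone()
    x[:, :, y1:y2, x1:x2] = x[index, :, y1:y2, x1:x2]
    lam = 1 - (y2 - y1) * (x2 - x1) / (h * w)  # exact area ratio of the kept region
    return x, y, y[index], lam

def train_step(student, teacher, optimizer, images, labels,
               tau=4.0, distill_weight=0.5, ema_momentum=0.999):
    # Typical setup (assumed): teacher = copy.deepcopy(student).requires_grad_(False)
    mixed, y_a, y_b, lam = cutmix(images, labels)
    logits = student(mixed)
    # Supervised loss on the mixed labels (standard CutMix objective).
    ce = lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)
    with torch.no_grad():
        teacher_logits = teacher(mixed)
    # Knowledge distillation: match softened student and teacher distributions.
    kd = F.kl_div(F.log_softmax(logits / tau, dim=1),
                  F.softmax(teacher_logits / tau, dim=1),
                  reduction="batchmean") * tau * tau
    loss = ce + distill_weight * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Online self-distillation: the teacher tracks the student via an EMA update.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_momentum).add_(s_p, alpha=1 - ema_momentum)
    return loss.item()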
Pages: 1067 - 1070
Page count: 4
Related Papers
50 items in total
  • [1] Self-Distillation for Few-Shot Image Captioning
    Chen, Xianyu
    Jiang, Ming
    Zhao, Qi
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 545 - 555
  • [2] Contrastive knowledge-augmented self-distillation approach for few-shot learning
    Zhang, Lixu
    Shao, Mingwen
    Chen, Sijie
    Liu, Fukang
    JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (05)
  • [3] Few-Shot Learning Based on Dimensionally Enhanced Attention and Logit Standardization Self-Distillation
    Tang, Yuhong
    Li, Guang
    Zhang, Ming
    Li, Jianjun
    ELECTRONICS, 2024, 13 (15)
  • [4] Task-Agnostic Self-Distillation for Few-Shot Action Recognition
    Zhang, Bin
    Dan, Yuanjie
    Chen, Peng
    Li, Ronghua
    Gao, Nan
    Hum, Ruohong
    He, Xiaofei
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 5425 - 5433
  • [5] A Self-Distillation Embedded Supervised Affinity Attention Model for Few-Shot Segmentation
    Zhao, Qi
    Liu, Binghao
    Lyu, Shuchang
    Chen, Huojin
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16 (01) : 177 - 189
  • [6] Few-Shot Learning Based on Embedded Self-Distillation and Adaptive Wasserstein Distance for Hyperspectral Image Classification
    Li, Wenjie
    Shang, Shizhe
    Shang, Ronghua
    Feng, Dongzhu
    Zhang, Weitong
    Wang, Chao
    Feng, Jie
    Xu, Songhua
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63
  • [7] M2SD: Multiple Mixing Self-Distillation for Few-Shot Class-Incremental Learning
    Lin, Jinhao
    Wu, Ziheng
    Lin, Weifeng
    Huang, Jun
    Luo, RongHua
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 4, 2024, : 3422 - 3431
  • [8] Multi Aspect Attention Online Integrated Distillation For Few-shot Learning
    Wang, Cailing
    Wei, Qingchen
    2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023, : 4047 - 4051
  • [9] Efficient-PrototypicalNet with self knowledge distillation for few-shot learning
    Lim, Jit Yan
    Lim, Kian Ming
    Ooi, Shih Yin
    Lee, Chin Poo
    NEUROCOMPUTING, 2021, 459 : 327 - 337
  • [10] Enhanced ProtoNet With Self-Knowledge Distillation for Few-Shot Learning
    Habib, Mohamed El Hacen
    Kucukmanisa, Ayhan
    Urhan, Oguzhan
    IEEE ACCESS, 2024, 12 : 145331 - 145340