Few-shot Learning with Online Self-Distillation

Cited by: 9
Authors:
Liu, Sihan [1]
Wang, Yue [2]
Affiliations:
[1] Boston Univ, Boston, MA 02215 USA
[2] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
DOI: 10.1109/ICCVW54120.2021.00124
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract:
Few-shot learning is a long-standing problem in learning to learn: a model is trained on an extremely small amount of data and then tested on out-of-distribution data. Recent few-shot learning research has focused on developing representation models that adapt quickly to test tasks. To that end, we propose a model that learns representations through online self-distillation. Our model combines supervised training with knowledge distillation via a continuously updated teacher. We also find that data augmentation plays an important role in producing robust features. Our final model is trained with CutMix augmentation and online self-distillation. On the commonly used miniImageNet benchmark, it achieves 67.07% accuracy in the 5-way 1-shot setting and 83.03% in the 5-way 5-shot setting, outperforming comparable methods by 2.25% and 0.89%, respectively.
Pages: 1067-1070 (4 pages)
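
To make the training recipe in the abstract concrete, below is a minimal PyTorch sketch of one training step that combines a supervised loss on CutMix-ed batches with a distillation loss against a teacher that is continuously updated as an exponential moving average (EMA) of the student. The function names, the hyperparameters (temperature T, loss weight beta, EMA decay), and the EMA update rule are illustrative assumptions for exposition, not the authors' released implementation.

import copy
import numpy as np
import torch
import torch.nn.functional as F

def cutmix(x, y, alpha=1.0):
    # CutMix: paste a random rectangular patch from a shuffled copy of the
    # batch, and mix the labels in proportion to the surviving area.
    lam = float(np.random.beta(alpha, alpha))
    idx = torch.randperm(x.size(0))
    H, W = x.shape[2], x.shape[3]
    rh, rw = int(H * (1.0 - lam) ** 0.5), int(W * (1.0 - lam) ** 0.5)
    cy, cx = np.random.randint(H), np.random.randint(W)
    y1, y2 = max(cy - rh // 2, 0), min(cy + rh // 2, H)
    x1, x2 = max(cx - rw // 2, 0), min(cx + rw // 2, W)
    x = x.clone()
    x[:, :, y1:y2, x1:x2] = x[idx, :, y1:y2, x1:x2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / float(H * W)  # correct for clipping
    return x, y, y[idx], lam

def train_step(student, teacher, opt, x, y, T=4.0, beta=0.5, ema=0.999):
    # One step of supervised training + online self-distillation (sketch).
    x, ya, yb, lam = cutmix(x, y)
    logits = student(x)
    with torch.no_grad():
        t_logits = teacher(x)  # soft targets from the continuously updated teacher
    # Supervised loss on the CutMix-mixed labels.
    ce = lam * F.cross_entropy(logits, ya) + (1 - lam) * F.cross_entropy(logits, yb)
    # Distillation loss: match the teacher's temperature-softened distribution.
    kd = F.kl_div(F.log_softmax(logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    loss = (1 - beta) * ce + beta * kd
    opt.zero_grad()
    loss.backward()
    opt.step()
    # The "online" part: after every step the teacher tracks an exponential
    # moving average of the student (BatchNorm buffers omitted for brevity).
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(ema).add_(ps, alpha=1.0 - ema)
    return loss.item()

# Usage (illustrative): the teacher starts as a frozen copy of the student,
# e.g. teacher = copy.deepcopy(student), and train_step is called per batch.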