Self Supervision to Distillation for Long-Tailed Visual Recognition

Cited by: 37
Authors
Li, Tianhao [1]
Wang, Limin [1]
Wu, Gangshan [1]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
SMOTE
DOI
10.1109/ICCV48922.2021.00067
CLC classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep learning has achieved remarkable progress for visual recognition on large-scale balanced datasets but still performs poorly on real-world long-tailed data. Previous methods often adopt class re-balanced training strategies to alleviate the imbalance issue effectively, but they risk over-fitting the tail classes. The recent decoupling method overcomes the over-fitting issue by using a multi-stage training scheme, yet it is still incapable of capturing tail class information in the feature learning stage. In this paper, we show that soft labels can serve as a powerful solution for incorporating label correlation into a multi-stage training scheme for long-tailed recognition. The intrinsic relation between classes embodied by soft labels turns out to be helpful for long-tailed recognition by transferring knowledge from head to tail classes. Specifically, we propose a conceptually simple yet particularly effective multi-stage training scheme, termed Self Supervision to Distillation (SSD). This scheme is composed of two parts. First, we introduce a self-distillation framework for long-tailed recognition, which can mine the label relation automatically. Second, we present a new distillation label generation module guided by self-supervision. The distilled labels integrate information from both the label and data domains, which can model the long-tailed distribution effectively. We conduct extensive experiments, and our method achieves state-of-the-art results on three long-tailed recognition benchmarks: ImageNet-LT, CIFAR100-LT, and iNaturalist 2018. Our SSD outperforms the strong LWS baseline by 2.7% to 4.5% on various datasets.
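The abstract outlines a two-part scheme: a self-distillation framework that mines label relations automatically, and a distillation-label generation module guided by self-supervision. As a rough illustration of the first part only, the sketch below shows a generic soft-label distillation loss in PyTorch, where a student is trained with cross-entropy on hard labels plus a KL term towards a teacher's temperature-softened soft labels. The class name SoftLabelDistillLoss and the hyper-parameters alpha and temperature are assumptions for illustration, not the paper's exact formulation.

import torch
import torch.nn.functional as F


class SoftLabelDistillLoss(torch.nn.Module):
    """Generic soft-label distillation loss (illustrative sketch, not the SSD method itself)."""

    def __init__(self, alpha: float = 0.5, temperature: float = 2.0):
        super().__init__()
        self.alpha = alpha            # weight between hard-label CE and soft-label KL
        self.temperature = temperature

    def forward(self, student_logits, teacher_logits, hard_labels):
        # Standard cross-entropy on the ground-truth (hard) labels.
        ce = F.cross_entropy(student_logits, hard_labels)

        # KL divergence towards the teacher's temperature-softened distribution;
        # the soft labels carry inter-class relations (head-to-tail knowledge transfer).
        t = self.temperature
        soft_teacher = F.softmax(teacher_logits.detach() / t, dim=1)
        log_student = F.log_softmax(student_logits / t, dim=1)
        kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

        return (1.0 - self.alpha) * ce + self.alpha * kd

In a multi-stage setting of the kind the abstract describes, teacher_logits would come from a frozen earlier-stage model and student_logits from the model being trained in the distillation stage; the exact label generation used by SSD is described in the paper.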
Pages: 610-619
Number of pages: 10
Related papers
50 records in total
  • [1] Balanced self-distillation for long-tailed recognition
    Ren, Ning
    Li, Xiaosong
    Wu, Yanxia
    Fu, Yan
    KNOWLEDGE-BASED SYSTEMS, 2024, 290
  • [2] A Survey on Long-Tailed Visual Recognition
    Yang, Lu
    Jiang, He
    Song, Qing
    Guo, Jun
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (07) : 1837 - 1872
  • [3] A Survey on Long-Tailed Visual Recognition
    Lu Yang
    He Jiang
    Qing Song
    Jun Guo
    International Journal of Computer Vision, 2022, 130 : 1837 - 1872
  • [4] One-stage self-distillation guided knowledge transfer for long-tailed visual recognition
    Xia, Yuelong
    Zhang, Shu
    Wang, Jun
    Zou, Wei
    Zhou, Juxiang
    Wen, Bin
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (12) : 11893 - 11908
  • [5] Decoupled Optimisation for Long-Tailed Visual Recognition
    Cong, Cong
    Xuan, Shiyu
    Liu, Sidong
    Zhang, Shiliang
    Pagnucco, Maurice
    Song, Yang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 2, 2024, : 1380 - 1388
  • [6] MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition
    Zhao, Qihao
    Jiang, Chen
    Hu, Wei
    Zhang, Fan
    Liu, Jun
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 11563 - 11574
  • [7] Virtual Student Distribution Knowledge Distillation for Long-Tailed Recognition
    Liu, Haodong
    Huang, Xinlei
    Tang, Jialiang
    Jiang, Ning
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT IV, 2025, 15034 : 406 - 419
  • [8] Feature fusion network for long-tailed visual recognition
    Zhou, Xuesong
    Zhai, Junhai
    Cao, Yang
    PATTERN RECOGNITION, 2023, 144
  • [9] Attentive Feature Augmentation for Long-Tailed Visual Recognition
    Wang, Weiqiu
    Zhao, Zhicheng
    Wang, Pingyu
    Su, Fei
    Meng, Hongying
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (09) : 5803 - 5816
  • [10] Disentangling Label Distribution for Long-tailed Visual Recognition
    Hong, Youngkyu
    Han, Seungju
    Choi, Kwanghee
    Seo, Seokjun
    Kim, Beomsu
    Chang, Buru
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 6622 - 6632