Enhanced ProtoNet With Self-Knowledge Distillation for Few-Shot Learning

Cited by: 0
Authors
Habib, Mohamed El Hacen [1 ]
Kucukmanisa, Ayhan [1 ]
Urhan, Oguzhan [1 ]
Affiliations
[1] Kocaeli Univ, Dept Elect & Telecommun Engn, TR-41001 Kocaeli, Turkiye
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Prototypes; Computational modeling; Training; Predictive models; Few shot learning; Adaptation models; Data models; Feature extraction; Training data; Metalearning; Few-shot learning; meta-learning; prototypical networks; self-knowledge distillation
DOI
10.1109/ACCESS.2024.3472530
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Few-Shot Learning (FSL) has recently gained increased attention for its effectiveness in addressing data scarcity. Many approaches build on the FSL idea, including prototypical networks (ProtoNet), which pair strong performance with a simple architecture. Meanwhile, the self-knowledge distillation (SKD) technique has become a popular way to improve FSL models by transferring knowledge gained from additional training data. In this work, we apply self-knowledge distillation to ProtoNet to boost its performance. For each task, we compute prototypes from the few labeled examples (local prototypes) and from the many additional examples (global prototypes), and we use the global prototypes to distill knowledge into the few-shot learner. We employ distillation variants based on prototypes, logits, and predictions (soft labels). We evaluated our method on three popular FSL image classification benchmarks: CIFAR-FS, FC100, and miniImageNet. Our approach outperformed the baseline and achieved results competitive with state-of-the-art methods, especially on FC100.
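The abstract describes the mechanism only compactly, so the sketch below illustrates one plausible reading of it: class prototypes averaged from a small support set (local) versus a larger auxiliary set (global), with the global side acting as teacher through three distillation signals. This is a minimal, hypothetical PyTorch sketch, not the authors' released code; the encoder is elided (toy random embeddings stand in for features), and names such as `prototypes`, `proto_logits`, `distillation_losses`, and the temperature `tau` are illustrative assumptions.

```python
# Hypothetical sketch of prototype-based self-knowledge distillation
# in the style of ProtoNet. Not the paper's implementation.
import torch
import torch.nn.functional as F

def prototypes(features, labels, n_way):
    """Average the embeddings of each class to get one prototype per class."""
    return torch.stack([features[labels == c].mean(dim=0) for c in range(n_way)])

def proto_logits(queries, protos):
    """Negative squared Euclidean distance to each prototype, as in ProtoNet."""
    return -torch.cdist(queries, protos).pow(2)

def distillation_losses(query_emb, query_y, local_protos, global_protos, tau=4.0):
    """Three distillation signals of the kind the abstract names:
    prototype matching, logit matching, and soft-label (prediction) matching."""
    student_logits = proto_logits(query_emb, local_protos)
    with torch.no_grad():  # global (teacher) branch gives targets only
        teacher_logits = proto_logits(query_emb, global_protos)
    # (1) prototype-level: pull local prototypes toward the global ones
    proto_loss = F.mse_loss(local_protos, global_protos.detach())
    # (2) logit-level: match the distance-based logits directly
    logit_loss = F.mse_loss(student_logits, teacher_logits)
    # (3) prediction-level: KL between temperature-softened soft labels
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits / tau, dim=1),
        reduction="batchmean",
    ) * tau ** 2
    ce_loss = F.cross_entropy(student_logits, query_y)  # standard task loss
    return ce_loss, proto_loss, logit_loss, soft_loss

# Toy episode: 5-way, 5-shot support; 20 extra examples/class for global protos.
torch.manual_seed(0)
n_way, dim = 5, 64
support = torch.randn(n_way * 5, dim)
support_y = torch.arange(n_way).repeat_interleave(5)
extra = torch.randn(n_way * 20, dim)
extra_y = torch.arange(n_way).repeat_interleave(20)
query = torch.randn(n_way * 15, dim)
query_y = torch.arange(n_way).repeat_interleave(15)

local_p = prototypes(support, support_y, n_way)
global_p = prototypes(torch.cat([support, extra]),
                      torch.cat([support_y, extra_y]), n_way)
losses = distillation_losses(query, query_y, local_p, global_p)
print([f"{l.item():.3f}" for l in losses])
```

In a real episode the embeddings would come from a shared backbone rather than random tensors, and the four losses would be combined with weighting coefficients; the paper's exact loss weighting and training schedule are not reproduced here.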
Pages: 145331-145340
Page count: 10
Related Papers (50 total)
  • [21] SRL-ProtoNet: Self-supervised representation learning for few-shot remote sensing scene classification
    Liu, Bing
    Zhao, Hongwei
    Li, Jiao
    Gao, Yansheng
    Zhang, Jianrong
    IET COMPUTER VISION, 2024, 18 (07) : 1034 - 1042
  • [22] Self-Distillation for Few-Shot Image Captioning
    Chen, Xianyu
    Jiang, Ming
    Zhao, Qi
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 545 - 555
  • [23] Knowledge-Enhanced Prompt Learning for Few-Shot Text Classification
    Liu, Jinshuo
    Yang, Lu
    BIG DATA AND COGNITIVE COMPUTING, 2024, 8 (04)
  • [24] Knowledge Graph enhanced Multimodal Learning for Few-shot Visual Recognition
    Han, Mengya
    Zhan, Yibing
    Yu, Baosheng
    Luo, Yong
    Du, Bo
    Tao, Dacheng
2022 IEEE 24TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2022
  • [25] EKD: Effective Knowledge Distillation for Few-Shot Sentiment Analysis
    Jiang, Kehan
Cai, Hongtian
    Lv, Yingda
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VII, 2024, 15022 : 164 - 176
  • [26] Generalized Few-Shot Node Classification With Graph Knowledge Distillation
    Wang, Jialong
    Zhou, Mengting
    Zhang, Shilong
    Gong, Zhiguo
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024
  • [27] Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental Learning
    Cheraghian, Ali
    Rahman, Shafin
    Fang, Pengfei
    Roy, Soumava Kumar
    Petersson, Lars
    Harandi, Mehrtash
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 2534 - 2543
  • [28] Dynamic Knowledge Path Learning for Few-Shot Learning
    Li, Jingzhu
    Yin, Zhe
    Yang, Xu
    Jiao, Jianbin
    Ding, Ye
BIG DATA MINING AND ANALYTICS, 2025, 8 (02): 479 - 495
  • [29] Adaptive Learning Knowledge Networks for Few-Shot Learning
    Yan, Minghao
    IEEE ACCESS, 2019, 7 : 119041 - 119051
  • [30] Enhancing Few-Shot Learning in Lightweight Models via Dual-Faceted Knowledge Distillation
    Zhou, Bojun
    Cheng, Tianyu
    Zhao, Jiahao
    Yan, Chunkai
    Jiang, Ling
    Zhang, Xinsong
    Gu, Juping
    SENSORS, 2024, 24 (06)