NC2E: boosting few-shot learning with novel class center estimation

Cited by: 0
Authors
Wu, Zheng [1 ,2 ]
Shen, Changchun [2 ]
Guo, Kehua [2 ]
Luo, Entao [1 ]
Wang, Liwei [2 ]
Affiliations
[1] Hunan Univ Sci & Engn, Sch Informat Engn, Yongzhou 425199, Peoples R China
[2] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2023, Vol. 35, Issue 9
Keywords
Few-shot learning; Object recognition; Class distribution estimation; Similar class classification
DOI
10.1007/s00521-022-08080-w
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Accurate class distribution estimation is expected to address the poor generalization that few-shot learning models exhibit due to data shortages. However, the reliability of class distribution estimates based on limited samples and limited knowledge is questionable, especially for similar classes. Through double-validation experiments, we find that the distribution calibration method estimates similar classes inaccurately because limited knowledge is reused. To address this issue, we propose a novel class center estimation (NC2E) method, which consists of a two-stage center estimation (TCE) algorithm and a class centroid estimation (CCE) algorithm. The class centers estimated by TCE over its two stages lie closer to the true centers, and this superiority is demonstrated by error theory. CCE iteratively searches for the centroids of the base classes, which serve as the basis for calibrating the novel classes. Sufficient simulated samples are then generated from the estimated class distribution to augment the training data. Experimental results show that, compared with the distribution calibration method, the proposed method achieves approximately 1% higher performance on the miniImageNet and CUB datasets, approximately 1.45% higher performance on similar-class classification, and approximately 6.06% higher performance on non-similar-class classification.
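The estimate-then-sample pipeline described in the abstract follows the general distribution-calibration recipe: estimate a novel class's mean and covariance by borrowing statistics from nearby base classes, then draw simulated features to augment the few-shot support set. Below is a minimal sketch of that generic recipe, not the authors' NC2E/TCE/CCE algorithms; the base-class statistics, the blending formula, and all names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy base-class statistics (mean and covariance per base class).
# In practice these come from features of a pretrained backbone;
# here they are synthetic stand-ins.
base_means = {0: np.zeros(4), 1: np.full(4, 3.0)}
base_covs = {0: np.eye(4), 1: 0.5 * np.eye(4)}

def calibrate_and_sample(support, k=1, n_samples=100, alpha=0.2):
    """Estimate a novel class distribution from a few support features
    by borrowing statistics from the k nearest base classes, then draw
    simulated samples (a generic distribution-calibration sketch)."""
    naive_mean = support.mean(axis=0)
    # Rank base classes by distance of their means to the naive center.
    dists = {c: np.linalg.norm(m - naive_mean) for c, m in base_means.items()}
    nearest = sorted(dists, key=dists.get)[:k]
    # Calibrated mean: blend the support mean with the nearest base means.
    mean = (naive_mean + sum(base_means[c] for c in nearest)) / (k + 1)
    # Calibrated covariance: average nearest base covariances plus slack.
    cov = sum(base_covs[c] for c in nearest) / k + alpha * np.eye(support.shape[1])
    # Simulated samples drawn from the calibrated Gaussian.
    return rng.multivariate_normal(mean, cov, size=n_samples)

support = rng.normal(loc=0.1, scale=1.0, size=(5, 4))  # 5-shot, 4-dim features
augmented = calibrate_and_sample(support)
print(augmented.shape)  # (100, 4)
```

The augmented samples, together with the original support set, can then train a simple classifier (e.g. logistic regression) for the novel classes.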
Pages: 7049-7062
Page count: 14
Related Papers
50 items total
  • [31] On the Approximation Risk of Few-Shot Class-Incremental Learning
    Wang, Xuan
    Ji, Zhong
    Liu, Xiyao
    Pang, Yanwei
    Han, Jungong
    COMPUTER VISION - ECCV 2024, PT LI, 2025, 15109 : 162 - 178
  • [32] Pseudo initialization based Few-Shot Class Incremental Learning
    Shao, Mingwen
    Zhuang, Xinkai
    Zhang, Lixu
    Zuo, Wangmeng
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 247
  • [33] Meta Learning for Few-Shot One-Class Classification
    Dahia, Gabriel
    Segundo, Mauricio Pamplona
    AI, 2021, 2 (02) : 195 - 208
  • [34] NTK-Guided Few-Shot Class Incremental Learning
    Liu, Jingren
    Ji, Zhong
    Pang, Yanwei
    Yu, Yunlong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 6029 - 6044
  • [35] Dual class representation learning for few-shot image classification
    Singh, Pravendra
    Mazumder, Pratik
    KNOWLEDGE-BASED SYSTEMS, 2022, 238
  • [36] Few-Shot Class Incremental Learning with Generative Feature Replay
    Shankarampeta, Abhilash Reddy
    Yamauchi, Koichiro
    PROCEEDINGS OF THE 10TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS (ICPRAM), 2021, : 259 - 267
  • [37] Rethinking Few-Shot Class-Incremental Learning: Learning from Yourself
    Tang, Yu-Ming
    Peng, Yi-Xing
    Meng, Jingke
    Zheng, Wei-Shi
    COMPUTER VISION - ECCV 2024, PT LXI, 2025, 15119 : 108 - 128
  • [38] Few-Shot Class-Incremental Learning Based on Feature Distribution Learning
    Yao, Guangle
    Zhu, Juntao
    Zhou, Wenlong
    Zhang, Guiyu
    Zhang, Wei
    Zhang, Qian
    Computer Engineering and Applications, 2023, 59 (14) : 151 - 157
  • [39] Rethinking few-shot class-incremental learning: A lazy learning baseline
    Qin, Zhili
    Han, Wei
    Liu, Jiaming
    Zhang, Rui
    Yang, Qingli
    Sun, Zejun
    Shao, Junming
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 250
  • [40] Analogical Learning-Based Few-Shot Class-Incremental Learning
    Li, Jiashuo
    Dong, Songlin
    Gong, Yihong
    He, Yuhang
    Wei, Xing
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07) : 5493 - 5504