SAPENet: Self-Attention based Prototype Enhancement Network for Few-shot Learning

Cited by: 35
Authors
Huang, Xilang [1 ]
Choi, Seon Han [2 ,3 ]
Affiliations
[1] Pukyong Natl Univ, Dept Artificial Intelligent Convergence, Pusan 48513, South Korea
[2] Ewha Womans Univ, Dept Elect & Elect Engn, Seoul 03760, South Korea
[3] Ewha Womans Univ, Grad Program Smart Factory, Seoul 03760, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Few-shot learning; Multi-head self-attention mechanism; Image classification; k-Nearest neighbor;
DOI
10.1016/j.patcog.2022.109170
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Few-shot learning considers the problem of learning unseen categories given only a few labeled samples. As one of the most popular few-shot learning approaches, Prototypical Networks have received considerable attention owing to their simplicity and efficiency. However, a class prototype is typically obtained by averaging the few labeled samples belonging to the same class, which treats the samples as equally important and is thus prone to learning redundant features. Herein, we propose a self-attention based prototype enhancement network (SAPENet) to obtain a more representative prototype for each class. SAPENet utilizes multi-head self-attention mechanisms to selectively augment discriminative features in each sample feature map, and generates channel attention maps between intra-class sample features to attentively retain informative channel features for that class. The augmented feature maps and attention maps are finally fused to obtain representative class prototypes. Thereafter, a local descriptor-based metric module is employed to fully exploit the channel information of the prototypes by searching for the k most similar local descriptors of the prototype for each local descriptor in the unlabeled samples for classification. We performed experiments on multiple benchmark datasets: miniImageNet, tieredImageNet, and CUB-200-2011. The experimental results on these datasets show that SAPENet achieves a considerable improvement compared to Prototypical Networks and also outperforms related state-of-the-art methods. © 2022 Elsevier Ltd. All rights reserved.
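
The abstract describes two technical components: a self-attention-based enhancement that fuses the K support feature maps of a class into one prototype, and a local-descriptor metric that matches each local descriptor of an unlabeled sample to its k most similar descriptors in the prototype. Below is a minimal PyTorch sketch of these two ideas for a single class. It illustrates the described mechanisms only and is not the authors' implementation; the function names (enhanced_prototype, knn_local_descriptor_score), tensor shapes, and the pooled channel-weighting scheme are all assumptions.

# Illustrative sketch only: shapes, the channel-weighting scheme, and all
# names are assumptions, not the SAPENet reference implementation.
import torch
import torch.nn.functional as F
from torch import nn


def enhanced_prototype(support: torch.Tensor, attn: nn.MultiheadAttention) -> torch.Tensor:
    """Fuse self-attended support features of one class into a prototype.

    support: (K, C, H, W) feature maps of the K labeled samples of a class.
    attn:    nn.MultiheadAttention(embed_dim=C, ..., batch_first=True).
    """
    K, C, H, W = support.shape
    # Treat each spatial location as a token: (K, H*W, C).
    tokens = support.flatten(2).transpose(1, 2)
    # Multi-head self-attention augments discriminative features per sample.
    augmented, _ = attn(tokens, tokens, tokens)
    augmented = augmented.transpose(1, 2).reshape(K, C, H, W)
    # Channel weighting shared across the K intra-class samples: a simple
    # pooled stand-in for the paper's intra-class channel attention maps.
    channel_w = torch.sigmoid(augmented.mean(dim=(0, 2, 3)))   # (C,)
    # Fuse the weighted, augmented maps into one class prototype.
    return (augmented * channel_w.view(1, C, 1, 1)).mean(0)    # (C, H, W)


def knn_local_descriptor_score(query: torch.Tensor, prototype: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Image-to-class similarity from k nearest local descriptors.

    query, prototype: (C, H, W); each of the H*W columns after flattening
    is one C-dimensional local descriptor.
    """
    q = F.normalize(query.flatten(1), dim=0)       # (C, Hq*Wq), unit columns
    p = F.normalize(prototype.flatten(1), dim=0)   # (C, Hp*Wp)
    sim = q.t() @ p                                # pairwise cosine similarity
    # For every query descriptor, keep its k most similar prototype
    # descriptors and sum them; a higher score means a better class match.
    return sim.topk(k, dim=1).values.sum()


# Usage sketch: one class of a 5-shot episode, 64-channel 5x5 feature maps.
if __name__ == "__main__":
    K, C, H, W = 5, 64, 5, 5
    attn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)
    proto = enhanced_prototype(torch.randn(K, C, H, W), attn)
    score = knn_local_descriptor_score(torch.randn(C, H, W), proto, k=3)
    print(proto.shape, score.item())

In a full N-way episode, this score would be computed against each of the N class prototypes and the query assigned to the class with the highest score.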
Pages: 11