Attribute- and attention-guided few-shot classification

Cited by: 0
Authors
Ziquan Wang
Hui Li
Zikai Zhang
Feng Chen
Jia Zhai
Affiliations
[1] National Key Laboratory of Scattering and Radiation
[2] Tsinghua University
[3] Dalian University of Technology
Source
Multimedia Systems | 2024 / Vol. 30
Keywords
Few-shot learning; Attribute; Attention mechanism; Image classification;
DOI
Not available
Abstract
The field of image classification faces significant challenges when target samples are scarce, leading to model overfitting and training difficulties. Few-shot learning has emerged as a promising approach to these issues. However, current methods do not fully exploit the correlations among samples or external semantic information, resulting in poor recognition accuracy. To overcome these limitations, we propose a new few-shot classification method guided by both attributes and attention. The method leverages an attention mechanism to extract discriminative features from images. By exploring regional correlations among samples and utilizing predicted attribute features, it generates better visual representations and, in turn, accurate prototypes. Extensive experiments were conducted on two attribute-labeled datasets, Caltech-UCSD Birds-200-2011 (CUB) and the SUN Attribute Database (SUN). With a ResNet12 backbone, the method achieves accuracies of 79.95% and 89.34% for 1-shot and 5-shot, respectively, on the CUB dataset. With a Conv4 backbone, it achieves accuracies of 67.21% and 80.87% for 1-shot and 5-shot, respectively, on the SUN Attribute dataset. These results highlight the robustness and generalizability of our method and its ability to classify samples accurately with limited training data, a significant advantage in real-world scenarios where labeled data are often scarce.
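The abstract describes generating class prototypes from attention-refined visual features fused with predicted attribute features, then classifying queries against those prototypes. The paper's exact architecture is not given here; the following is only a minimal sketch of that general idea in the style of prototypical networks, where the fusion weight `alpha` and the mean-based fusion rule are assumptions for illustration, not the authors' method:

```python
import numpy as np

def fused_prototypes(support_feats, support_attrs, labels, n_way, alpha=0.5):
    """Build one prototype per class by mixing the mean visual embedding
    of each class's support samples with the mean of their (predicted)
    attribute embeddings.

    alpha is a hypothetical mixing weight; the actual fusion rule in the
    paper is not specified in the abstract.
    """
    dim = support_feats.shape[1]
    protos = np.zeros((n_way, dim))
    for c in range(n_way):
        mask = labels == c
        visual = support_feats[mask].mean(axis=0)    # mean visual feature
        semantic = support_attrs[mask].mean(axis=0)  # mean attribute feature
        protos[c] = alpha * visual + (1 - alpha) * semantic
    return protos

def classify(query_feats, protos):
    """Assign each query to the nearest prototype (squared Euclidean)."""
    dists = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)
```

In a real episode, `support_feats` would come from the backbone (ResNet12 or Conv4) after attention, and `support_attrs` from an attribute-prediction head; here both are plain arrays so the prototype/nearest-neighbor logic can be run standalone.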