Learning unseen visual prototypes for zero-shot classification

Cited: 18
|
Authors
Li, Xiao [1 ]
Fang, Min [1 ]
Feng, Dazheng [2 ]
Li, Haikun [1 ]
Wu, Jinqiao [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Elect Engn, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Zero-shot classification; Unseen visual prototypes; Semantic correlation; Hubness; Domain shift; RECOGNITION;
DOI
10.1016/j.knosys.2018.06.034
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The number of object classes is increasing rapidly, which makes recognizing new classes difficult. Zero-shot learning aims to predict the labels of samples from new classes by using seen-class samples and their semantic representations. In this paper, we propose a simple method for learning unseen visual prototypes (LUVP) by learning a projection function from the semantic space to the visual feature space, which reduces the hubness problem. We exploit class-level rather than instance-level samples, which alleviates the heavy computational cost. Because the seen and unseen classes are disjoint, directly applying the projection function to unseen samples causes a domain shift problem. We therefore preserve the semantic correlations among the unseen labels and adjust the unseen visual prototypes to minimize the domain shift. We demonstrate through extensive experiments that the proposed method (1) alleviates the hubness problem, (2) overcomes the domain shift problem, and (3) significantly outperforms existing methods for zero-shot classification on five benchmark datasets.
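The core pipeline described in the abstract can be sketched in a few lines. The sketch below is illustrative only: it assumes class-level visual prototypes are seen-class feature means, uses a plain ridge-regression map from semantic vectors into the visual feature space (mapping into visual space is the abstract's stated remedy for hubness), and classifies test samples by nearest predicted prototype. The function names, the ridge solver, and the `lam` regularizer are assumptions, not the authors' exact formulation, and the semantic-correlation adjustment step is omitted.

```python
import numpy as np

def learn_unseen_prototypes(X_seen, y_seen, S_seen, S_unseen, lam=1.0):
    """Hypothetical sketch in the spirit of LUVP.

    X_seen:   (n, d_vis) visual features of seen-class samples
    y_seen:   (n,) integer class labels 0..C-1
    S_seen:   (C, d_sem) semantic vectors of seen classes
    S_unseen: (U, d_sem) semantic vectors of unseen classes
    Returns predicted (U, d_vis) visual prototypes for unseen classes.
    """
    classes = np.unique(y_seen)
    # Class-level visual prototypes: mean feature vector per seen class.
    P_seen = np.stack([X_seen[y_seen == c].mean(axis=0) for c in classes])
    # Ridge regression from semantic space INTO visual feature space
    # (an illustrative choice of solver, not the paper's exact objective).
    d_sem = S_seen.shape[1]
    W = np.linalg.solve(S_seen.T @ S_seen + lam * np.eye(d_sem),
                        S_seen.T @ P_seen)
    # Apply the learned projection to unseen semantic vectors.
    return S_unseen @ W

def classify(X_test, P_unseen):
    """Nearest-prototype assignment in visual space."""
    dists = np.linalg.norm(X_test[:, None, :] - P_unseen[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

Because both fitting and prediction operate on one prototype per class rather than on every training instance, the projection is learned from a C-by-d system instead of an n-by-d one, which is the computational saving the abstract refers to.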
Pages: 176-187
Page count: 12
Related papers
50 records
  • [21] Generalised Zero-Shot Learning with Domain Classification in a Joint Semantic and Visual Space
    Felix, Rafael
    Harwood, Ben
    Sasdelli, Michele
    Carneiro, Gustavo
    2019 DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2019, : 17 - 24
  • [22] Learning visual-and-semantic knowledge embedding for zero-shot image classification
    Dehui Kong
    Xiliang Li
    Shaofan Wang
    Jinghua Li
    Baocai Yin
    Applied Intelligence, 2023, 53 : 2250 - 2264
  • [24] Generalized zero-shot learning for classifying unseen wafer map patterns
    Kim, Han Kyul
    Shim, Jaewoong
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [25] Learning Class Prototypes via Structure Alignment for Zero-Shot Recognition
    Jiang, Huajie
    Wang, Ruiping
    Shan, Shiguang
    Chen, Xilin
    COMPUTER VISION - ECCV 2018, PT X, 2018, 11214 : 121 - 138
  • [26] Infer unseen from seen: Relation regularized zero-shot visual dialog
    Zhang, Zefan
    Li, Shun
    Ji, Yi
    Liu, Chunping
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 97
  • [27] Generalized zero-shot classification via iteratively generating and selecting unseen samples
    Li, Xiao
    Fang, Min
    Chen, Bo
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2021, 92
  • [28] Learning Invariant Visual Representations for Compositional Zero-Shot Learning
    Zhang, Tian
    Liang, Kongming
    Du, Ruoyi
    Sun, Xian
    Ma, Zhanyu
    Guo, Jun
    COMPUTER VISION, ECCV 2022, PT XXIV, 2022, 13684 : 339 - 355
  • [29] Zero-shot recognition with latent visual attributes learning
    Xie, Yurui
    He, Xiaohai
    Zhang, Jing
    Luo, Xiaodong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (37-38) : 27321 - 27335
  • [30] Joint Visual and Semantic Optimization for zero-shot learning
    Wu, Hanrui
    Yan, Yuguang
    Chen, Sentao
    Huang, Xiangkang
    Wu, Qingyao
    Ng, Michael K.
    KNOWLEDGE-BASED SYSTEMS, 2021, 215 (215)