Learning unseen visual prototypes for zero-shot classification

Cited by: 18
Authors
Li, Xiao [1 ]
Fang, Min [1 ]
Feng, Dazheng [2 ]
Li, Haikun [1 ]
Wu, Jinqiao [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Elect Engn, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Zero-shot classification; Unseen visual prototypes; Semantic correlation; Hubness; Domain shift; RECOGNITION;
DOI
10.1016/j.knosys.2018.06.034
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
The number of object classes is increasing rapidly, which makes recognizing new classes difficult. Zero-shot learning aims to predict the labels of new-class samples by using seen-class samples and their semantic representations. In this paper, we propose a simple method to learn the unseen visual prototypes (LUVP) by learning a projection function from the semantic space to the visual feature space, which reduces the hubness problem. We exploit class-level rather than instance-level samples, which alleviates expensive computational costs. Because the seen and unseen classes are disjoint, directly applying the projection function to unseen samples causes a domain shift problem. We therefore preserve the semantic correlations among the unseen labels and adjust the unseen visual prototypes to minimize the domain shift. We demonstrate through extensive experiments that the proposed method (1) alleviates the hubness problem, (2) overcomes the domain shift problem, and (3) significantly outperforms existing methods for zero-shot classification on five benchmark datasets.
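The pipeline described in the abstract (learn a semantic-to-visual projection from seen classes, project unseen semantic vectors to obtain unseen visual prototypes, then classify test features by nearest prototype) can be sketched as follows. The ridge-regression closed form, all dimensions, and the random data are illustrative assumptions, not the paper's exact objective, and the domain-shift adjustment step is omitted.

```python
import numpy as np

# Hedged sketch of prototype-based zero-shot classification:
# fit a linear map W from semantic space to visual feature space using
# seen-class prototypes (class-mean visual features), then project unseen
# semantic vectors to get unseen visual prototypes and classify by
# nearest prototype. Ridge regression here is an assumed stand-in for
# the paper's learned projection.

rng = np.random.default_rng(0)
d_sem, d_vis = 10, 32           # semantic / visual dimensions (assumed)
n_seen, n_unseen = 5, 3

S_seen = rng.normal(size=(n_seen, d_sem))      # seen-class semantic vectors
P_seen = rng.normal(size=(n_seen, d_vis))      # seen visual prototypes (class means)
S_unseen = rng.normal(size=(n_unseen, d_sem))  # unseen-class semantic vectors

# Ridge regression closed form: W = (S^T S + lam * I)^{-1} S^T P
lam = 1.0
W = np.linalg.solve(S_seen.T @ S_seen + lam * np.eye(d_sem), S_seen.T @ P_seen)

# Project unseen semantic vectors into visual space -> unseen visual prototypes
P_unseen = S_unseen @ W

def classify(x):
    """Assign a test visual feature to the nearest unseen prototype."""
    dists = np.linalg.norm(P_unseen - x, axis=1)
    return int(np.argmin(dists))

# A sample near unseen prototype 1 should be assigned label 1
x_test = P_unseen[1] + 0.01 * rng.normal(size=d_vis)
print(classify(x_test))
```

Performing nearest-neighbour search in the visual space (rather than the semantic space) is what the abstract credits with reducing hubness, since visual features tend to be less concentrated than low-dimensional semantic embeddings.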
Pages: 176-187
Page count: 12
Related papers (50 total)
  • [1] Zero-shot classification with unseen prototype learning
    Ji, Zhong
    Cui, Biying
    Yu, Yunlong
    Pang, Yanwei
    Zhang, Zhongfei
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (17): : 12307 - 12317
  • [3] From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis
    Long, Yang
    Liu, Li
    Shao, Ling
    Shen, Fumin
    Ding, Guiguang
    Han, Jungong
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 6165 - 6174
  • [4] Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning
    Changpinyo, Soravit
    Chao, Wei-Lun
    Sha, Fei
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 3496 - 3505
  • [5] Adversarial unseen visual feature synthesis for Zero-shot Learning
    Zhang, Haofeng
    Long, Yang
    Liu, Li
    Shao, Ling
    NEUROCOMPUTING, 2019, 329 : 12 - 20
  • [6] Enhancing Zero-Shot Learning Through Kernelized Visual Prototypes and Similarity Learning
    Cheng, Kanglong
    Fang, Bowen
    MATHEMATICS, 2025, 13 (03)
  • [7] Learning domain invariant unseen features for generalized zero-shot classification
    Li, Xiao
    Fang, Min
    Li, Haikun
    Wu, Jinqiao
    KNOWLEDGE-BASED SYSTEMS, 2020, 206
  • [8] Zero-Shot Learning Using Synthesised Unseen Visual Data with Diffusion Regularisation
    Long, Yang
    Liu, Li
    Shen, Fumin
    Shao, Ling
    Li, Xuelong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (10) : 2498 - 2512
  • [9] Towards Visual Explainable Active Learning for Zero-Shot Classification
    Jia, Shichao
    Li, Zeyu
    Chen, Nuo
    Zhang, Jiawan
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2022, 28 (01) : 791 - 801
  • [10] Rethinking Zero-Shot Learning: A Conditional Visual Classification Perspective
    Li, Kai
    Min, Martin Renqiang
    Fu, Yun
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 3582 - 3591