Multidomain Features Fusion for Zero-Shot Learning

Times Cited: 4
Authors
Liu, Zhihao [1 ,2 ]
Zeng, Zhigang [1 ,2 ]
Lian, Cheng [3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Automat, Wuhan 430074, Peoples R China
[2] Educ Minist China, Key Lab Image Proc & Intelligent Control, Wuhan 430074, Hubei, Peoples R China
[3] Wuhan Univ Technol, Sch Automat, Wuhan 430074, Peoples R China
Keywords
Image classification; image retrieval; semantics; transfer learning; zero-shot learning;
DOI
10.1109/TETCI.2018.2868061
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Given a novel class instance, the goal of zero-shot learning (ZSL) is to learn a model that classifies the instance using seen samples and semantic information that transcends class boundaries. The difficulty lies in finding a suitable space for zero-shot recognition. Previous approaches use the semantic space or the visual space as the classification space. These methods, which typically learn a visual-semantic or semantic-visual mapping and directly use the output of the mapping function to measure similarity when classifying new categories, do not adequately account for the complementarity of, and the distribution gap between, multiple domains of information. In this paper, we propose to learn a multidomain information fusion space through a joint learning framework. Specifically, we treat the fusion space as a shared space from which the features of each domain can be recovered by a simple linear transformation. By learning an n-way classifier over the fusion space from the seen-class samples, we also obtain discriminative information in the similarity space, making the fusion representation more separable. Extensive experiments on popular benchmark datasets demonstrate that our approach achieves state-of-the-art performance on both supervised and unsupervised ZSL tasks.
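The abstract describes a joint objective: a shared fusion space from which each domain's features can be recovered by a simple linear transformation, plus an n-way classifier over that space for discriminative structure. A minimal PyTorch sketch of such an objective follows; all dimensions, the averaging fusion, the loss weight `lam_rec`, and the encoder/decoder forms are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn


class FusionZSL(nn.Module):
    """Illustrative multidomain fusion space for ZSL (not the paper's exact model).

    Visual and semantic features are projected into a shared fusion space;
    linear decoders must recover each domain from the fusion code, and an
    n-way classifier on the fusion code separates the seen classes.
    """

    def __init__(self, d_vis=2048, d_sem=85, d_fuse=256, n_classes=40):
        super().__init__()
        self.enc_vis = nn.Linear(d_vis, d_fuse)   # visual -> fusion
        self.enc_sem = nn.Linear(d_sem, d_fuse)   # semantic -> fusion
        self.dec_vis = nn.Linear(d_fuse, d_vis)   # fusion -> visual (recovery)
        self.dec_sem = nn.Linear(d_fuse, d_sem)   # fusion -> semantic (recovery)
        self.clf = nn.Linear(d_fuse, n_classes)   # n-way classifier on fusion codes

    def forward(self, x_vis, x_sem):
        # Averaging the two projections is one simple choice of fusion.
        z = 0.5 * (self.enc_vis(x_vis) + self.enc_sem(x_sem))
        return z, self.dec_vis(z), self.dec_sem(z), self.clf(z)


def joint_loss(model, x_vis, x_sem, y, lam_rec=1.0):
    """Reconstruction terms keep the fusion space faithful to both domains;
    cross-entropy on seen classes makes it separable."""
    z, rec_vis, rec_sem, logits = model(x_vis, x_sem)
    rec = (nn.functional.mse_loss(rec_vis, x_vis)
           + nn.functional.mse_loss(rec_sem, x_sem))
    ce = nn.functional.cross_entropy(logits, y)
    return ce + lam_rec * rec


# Smoke run on random data (shapes only; no real training).
model = FusionZSL()
x_vis = torch.randn(8, 2048)                 # e.g. CNN features
x_sem = torch.randn(8, 85)                   # e.g. attribute vectors
y = torch.randint(0, 40, (8,))               # seen-class labels
z, _, _, logits = model(x_vis, x_sem)
loss = joint_loss(model, x_vis, x_sem, y)
loss.backward()  # the joint objective is differentiable end to end
```

At test time, an unseen instance could be classified by projecting its visual features into the fusion space and comparing them with the fusion codes of unseen-class semantic prototypes; that inference step is omitted here.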
Pages: 764-773
Page count: 10
Related Papers
50 records in total
  • [41] Synthesizing Samples for Zero-shot Learning
    Guo, Yuchen
    Ding, Guiguang
    Han, Jungong
    Gao, Yue
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 1774 - 1780
  • [42] Research and Development on Zero-Shot Learning
    Zhang, L.-N.
    Zuo, X.
    Liu, J.-W.
    Zidonghua Xuebao/Acta Automatica Sinica, 2020, 46 (01): : 1 - 23
  • [43] Evolutionary Generalized Zero-Shot Learning
    Chen, Dubing
    Jiang, Chenyi
    Zhang, Haofeng
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 632 - 640
  • [44] Towards Open Zero-Shot Learning
    Marmoreo, Federico
    Carrazco, Julio Ivan Davila
    Cavazza, Jacopo
    Murino, Vittorio
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT II, 2022, 13232 : 564 - 575
  • [45] Semantic Autoencoder for Zero-Shot Learning
    Kodirov, Elyor
    Xiang, Tao
    Gong, Shaogang
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 4447 - 4456
  • [46] Variational Disentangle Zero-Shot Learning
    Su, Jie
    Wan, Jinhao
    Li, Taotao
    Li, Xiong
    Ye, Yuheng
    MATHEMATICS, 2023, 11 (16)
  • [47] Zero-Shot Program Representation Learning
    Cui, Nan
    Jiang, Yuze
    Gu, Xiaodong
    Shen, Beijun
    arXiv, 2022,
  • [48] Zero-shot Learning With Fuzzy Attribute
    Liu, Chongwen
    Shang, Zhaowei
    Tang, Yuan Yan
    2017 3RD IEEE INTERNATIONAL CONFERENCE ON CYBERNETICS (CYBCONF), 2017, : 277 - 282
  • [49] Prototype rectification for zero-shot learning
    Yi, Yuanyuan
    Zeng, Guolei
    Ren, Bocheng
    Yang, Laurence T.
    Chai, Bin
    Li, Yuxin
    PATTERN RECOGNITION, 2024, 156
  • [50] Detecting Errors with Zero-Shot Learning
    Wu, Xiaoyu
    Wang, Ning
    ENTROPY, 2022, 24 (07)