Multidomain Features Fusion for Zero-Shot Learning

Cited by: 4
Authors
Liu, Zhihao [1 ,2 ]
Zeng, Zhigang [1 ,2 ]
Lian, Cheng [3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Automat, Wuhan 430074, Peoples R China
[2] Educ Minist China, Key Lab Image Proc & Intelligent Control, Wuhan 430074, Hubei, Peoples R China
[3] Wuhan Univ Technol, Sch Automat, Wuhan 430074, Peoples R China
Keywords
Image classification; image retrieval; semantics; transfer learning; zero-shot learning;
DOI
10.1109/TETCI.2018.2868061
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Given an instance of a novel class, the goal of zero-shot learning (ZSL) is to learn a model that classifies the instance using seen-class samples and semantic information that transcends class boundaries. The difficulty lies in finding a suitable space for zero-shot recognition. Previous approaches use either the semantic space or the visual space as the classification space. These methods typically learn a visual-to-semantic or semantic-to-visual mapping and directly use the output of the mapping function to measure similarity when classifying new categories, so they do not adequately consider the complementarity of multiple domains or the distribution gap between them. In this paper, we propose to learn a multidomain information fusion space through a joint learning framework. Specifically, we treat the fusion space as a shared space from which the features of each domain can be recovered by a simple linear transformation. By learning an n-way classifier on the fusion space from the seen-class samples, we also obtain discriminative information in the similarity space that makes the fusion representation more separable. Extensive experiments on popular benchmark datasets demonstrate that our approach achieves state-of-the-art performance on both supervised and unsupervised ZSL tasks.
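The joint objective outlined in the abstract can be sketched compactly. Below is a minimal, illustrative PyTorch sketch (not the authors' code) assuming linear encoders and decoders between each domain and a shared fusion space, reconstruction losses for both domains, and a cross-entropy loss from an n-way classifier over seen classes; all dimensions, module names, and loss weights are hypothetical.

```python
# Minimal sketch of a multidomain fusion-space objective for ZSL (assumed setup,
# not the paper's exact formulation). Visual and semantic features are mapped into
# a shared fusion space, recovered from it by simple linear decoders, and an n-way
# classifier on seen classes adds discriminative structure.
import torch
import torch.nn as nn

class FusionZSL(nn.Module):
    def __init__(self, vis_dim=2048, sem_dim=85, fus_dim=512, n_seen_classes=40):
        super().__init__()
        # Encoders map each domain into the shared fusion space.
        self.vis_enc = nn.Linear(vis_dim, fus_dim)
        self.sem_enc = nn.Linear(sem_dim, fus_dim)
        # Simple linear decoders recover each domain from the fusion space.
        self.vis_dec = nn.Linear(fus_dim, vis_dim)
        self.sem_dec = nn.Linear(fus_dim, sem_dim)
        # n-way classifier over seen classes.
        self.classifier = nn.Linear(fus_dim, n_seen_classes)

    def forward(self, vis, sem):
        fusion = 0.5 * (self.vis_enc(vis) + self.sem_enc(sem))  # fuse both domains
        return fusion, self.vis_dec(fusion), self.sem_dec(fusion), self.classifier(fusion)

def joint_loss(model, vis, sem, labels, alpha=1.0, beta=1.0):
    fusion, vis_rec, sem_rec, logits = model(vis, sem)
    rec = alpha * nn.functional.mse_loss(vis_rec, vis) \
        + beta * nn.functional.mse_loss(sem_rec, sem)   # recover both domains
    cls = nn.functional.cross_entropy(logits, labels)   # seen-class discrimination
    return rec + cls

# Toy usage on random data (batch of 8 seen-class samples).
model = FusionZSL()
vis = torch.randn(8, 2048)           # e.g., CNN image features
sem = torch.randn(8, 85)             # e.g., class attribute vectors
labels = torch.randint(0, 40, (8,))  # seen-class labels
loss = joint_loss(model, vis, sem, labels)
loss.backward()
```

At test time one would presumably embed candidate class semantics into the fusion space with the same semantic encoder and classify an unseen instance by nearest-neighbor search there; the exact inference rule is a detail of the paper not reproduced in this sketch.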
Pages: 764 - 773
Number of pages: 10
Related Papers
50 records in total
  • [31] Joint Dictionaries for Zero-Shot Learning
    Kolouri, Soheil
    Rostami, Mohammad
    Owechko, Yuri
    Kim, Kyungnam
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 3431 - 3439
  • [32] Research progress of zero-shot learning
    Sun, Xiaohong
    Gu, Jinan
    Sun, Hongying
    APPLIED INTELLIGENCE, 2021, 51 : 3600 - 3614
  • [33] Creativity Inspired Zero-Shot Learning
    Elhoseiny, Mohamed
    Elfeki, Mohamed
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 5783 - 5792
  • [34] Zero-Shot Learning With Transferred Samples
    Guo, Yuchen
    Ding, Guiguang
    Han, Jungong
    Gao, Yue
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (07) : 3277 - 3290
  • [35] Synthesized Classifiers for Zero-Shot Learning
    Changpinyo, Soravit
    Chao, Wei-Lun
    Gong, Boqing
    Sha, Fei
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 5327 - 5336
  • [36] LVQ Treatment for Zero-Shot Learning
    Ismailoglu, Firat
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2023, 31 (01) : 216 - 237
  • [37] Attribute subspaces for zero-shot learning
    Zhou, Lei
    Liu, Yang
    Bai, Xiao
    Li, Na
    Yu, Xiaohan
    Zhou, Jun
    Hancock, Edwin R.
    PATTERN RECOGNITION, 2023, 144
  • [38] A review on multimodal zero-shot learning
    Cao, Weipeng
    Wu, Yuhao
    Sun, Yixuan
    Zhang, Haigang
    Ren, Jin
    Gu, Dujuan
    Wang, Xingkai
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2023, 13 (02)
  • [39] Zero-Shot Learning with Attribute Selection
    Guo, Yuchen
    Ding, Guiguang
    Han, Jungong
    Tang, Sheng
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 6870 - 6877
  • [40] Zero-Shot Compositional Concept Learning
    Xu, Guangyue
    Kordjamshidi, Parisa
    Chai, Joyce Y.
    1ST WORKSHOP ON META LEARNING AND ITS APPLICATIONS TO NATURAL LANGUAGE PROCESSING (METANLP 2021), 2021, : 19 - 27