Learning Modality-Invariant Latent Representations for Generalized Zero-shot Learning

Cited by: 25
Authors
Li, Jingjing [1 ]
Jing, Mengmeng [1 ]
Zhu, Lei [2 ]
Ding, Zhengming [3 ]
Lu, Ke [1 ]
Yang, Yang [1 ]
Affiliations
[1] University of Electronic Science and Technology of China, Chengdu, China
[2] Shandong Normal University, Jinan, Shandong, China
[3] Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA
Funding
National Natural Science Foundation of China
Keywords
Zero-shot learning; mutual information estimation; generalized ZSL; variational autoencoders
DOI: 10.1145/3394171.3413503
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recently, feature generating methods have been successfully applied to zero-shot learning (ZSL). However, most previous approaches generate only visual representations for zero-shot recognition. In fact, typical ZSL is a classic multi-modal learning protocol that involves both a visual space and a semantic space. In this paper, we therefore present a new method that simultaneously generates visual and semantic representations, so that the essential multi-modal information associated with unseen classes can be captured. Specifically, we address the most challenging issue in such a paradigm, i.e., how to handle the domain shift and thus guarantee that the learned representations are modality-invariant. To this end, we propose two strategies: 1) leveraging the mutual information between the latent visual representations and the latent semantic representations; and 2) maximizing the entropy of the joint distribution of the two latent representations. We argue that these two strategies together align the two modalities well. Finally, extensive experiments on five widely used datasets verify that the proposed method significantly outperforms previous state-of-the-art methods.
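To make the two strategies concrete, below is a minimal PyTorch sketch, not the authors' implementation: two modality-specific variational encoders map visual features and class embeddings into a shared latent space; a MINE-style statistics network (Donsker-Varadhan bound) gives a lower bound on the mutual information between the two latent codes (strategy 1), and a Gaussian-entropy term serves as a simple proxy for the joint-entropy maximization (strategy 2). All network sizes, loss weights, and the choice of MI estimator are assumptions for illustration; the VAE reconstruction and KL terms of a full feature-generating model are omitted.

# Hypothetical sketch of the modality-alignment objective (assumed
# estimator and hyperparameters; NOT the paper's exact formulation).
import math
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Gaussian VAE encoder for one modality (visual or semantic)."""
    def __init__(self, in_dim, latent_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

class Critic(nn.Module):
    """Statistics network T(z_v, z_s) for the MI lower bound."""
    def __init__(self, latent_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, z_v, z_s):
        return self.net(torch.cat([z_v, z_s], dim=1)).squeeze(-1)

def mi_lower_bound(critic, z_v, z_s):
    """Donsker-Varadhan bound: E_joint[T] - log E_marginal[exp(T)]."""
    joint = critic(z_v, z_s).mean()
    shuffled = z_s[torch.randperm(z_s.size(0))]  # shuffle to break the pairing
    marginal = torch.logsumexp(critic(z_v, shuffled), dim=0) - math.log(z_v.size(0))
    return joint - marginal

def gaussian_entropy(logvar):
    """Entropy of a diagonal Gaussian, up to an additive constant."""
    return 0.5 * logvar.sum(dim=1).mean()

# One toy alignment step on random stand-ins for real features.
vis_enc = Encoder(2048, 64)  # e.g. ResNet visual features (assumed size)
sem_enc = Encoder(312, 64)   # e.g. CUB attribute vectors (assumed size)
critic = Critic(64)
opt = torch.optim.Adam(
    [*vis_enc.parameters(), *sem_enc.parameters(), *critic.parameters()], lr=1e-4)

x_v, x_s = torch.randn(128, 2048), torch.randn(128, 312)  # paired batch
z_v, _, logvar_v = vis_enc(x_v)
z_s, _, logvar_s = sem_enc(x_s)

mi = mi_lower_bound(critic, z_v, z_s)                          # strategy 1
ent = gaussian_entropy(logvar_v) + gaussian_entropy(logvar_s)  # strategy 2 (proxy)
loss = -mi - 0.1 * ent  # maximize both; the 0.1 weight is an assumption
opt.zero_grad(); loss.backward(); opt.step()

Intuitively, maximizing the MI bound forces paired visual and semantic codes to be statistically dependent, hence aligned across modalities, while the entropy term keeps the latent distribution from collapsing to a degenerate solution.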
Pages: 1348-1356
Page count: 9