Semantic Consistent Embedding for Domain Adaptive Zero-Shot Learning

Cited by: 6
Authors
Zhang, Jianyang [1 ]
Yang, Guowu [1 ]
Hu, Ping [2 ]
Lin, Guosheng [3 ]
Lv, Fengmao [4 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Sichuan, Peoples R China
[2] Boston Univ, Comp Sci Dept, Boston, MA 02215 USA
[3] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[4] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Chengdu 611756, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Zero-shot learning; unsupervised domain adaptation; transfer learning;
DOI
10.1109/TIP.2023.3293769
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised domain adaptation has limitations when there is a label discrepancy between the source and target domains. While open-set domain adaptation approaches can handle a target domain with additional categories, they can only detect those categories, not classify them. In this paper, we focus on a more challenging setting dubbed Domain Adaptive Zero-Shot Learning (DAZSL), which uses semantic embeddings of class tags as the bridge between seen and unseen classes to learn a classifier that recognizes all categories in the target domain when only supervision for the seen categories in the source domain is available. The main challenge of DAZSL is to transfer knowledge across categories and domain styles simultaneously. To this end, we propose a novel end-to-end learning mechanism dubbed Three-way Semantic Consistent Embedding (TSCE) that embeds the source domain, target domain, and semantic space into a shared space. Specifically, TSCE learns domain-irrelevant categorical prototypes from the semantic embeddings of class tags and uses them as the pivots of the shared space. Source-domain features are aligned with the prototypes via their supervised information, while a mutual information maximization mechanism pushes target-domain features and prototypes towards each other. In this way, our approach can align domain differences between source and target images and promote knowledge transfer towards unseen classes. Moreover, as there is no supervision in the target domain, the shared space may suffer from catastrophic forgetting. Hence, we further propose a ranking-based embedding alignment mechanism to maintain consistency between the semantic space and the shared space. Experimental results on both I2AwA and I2WebV clearly validate the effectiveness of our method. Code is available at https://github.com/tiggers23/TSCE-Domain-Adaptive-Zero-Shot-Learning.
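The abstract describes two loss components: supervised alignment of source features to semantic prototypes, and unsupervised mutual information maximization between target features and prototypes. The paper's exact formulation is not given in this record, so the following is a minimal NumPy sketch under stated assumptions: cosine-similarity logits against prototypes with a temperature, cross-entropy for the source branch, and the common marginal-minus-conditional entropy surrogate for mutual information on the target branch. All function names and the temperature `tau` are illustrative, not from the paper.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Normalize rows to unit length (for cosine similarity)."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def prototype_logits(features, prototypes, tau=0.1):
    """Cosine similarity between features and class prototypes, scaled by 1/tau."""
    return l2norm(features) @ l2norm(prototypes).T / tau

def source_alignment_loss(feats, protos, labels, tau=0.1):
    """Supervised branch: cross-entropy of source features vs. prototype logits."""
    logits = prototype_logits(feats, protos, tau)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def target_mi_surrogate(feats, protos, tau=0.1):
    """Unsupervised branch: MI surrogate = H(mean assignment) - mean H(assignment).
    Maximizing this pushes target features toward (distinct) prototypes."""
    logits = prototype_logits(feats, protos, tau)
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    cond_ent = -(p * np.log(p + 1e-8)).sum(axis=1).mean()  # per-sample entropy
    marg = p.mean(axis=0)                                  # batch-mean assignment
    marg_ent = -(marg * np.log(marg + 1e-8)).sum()
    return marg_ent - cond_ent  # larger = more confident and more diverse
```

With features sitting exactly on their class prototypes, the source loss is near zero and the MI surrogate approaches log(num_classes); with all target features collapsed onto one point, the surrogate drops to zero, which is the degenerate solution the maximization avoids.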
Pages: 4024 - 4035
Page count: 12
Related Papers
50 records
  • [21] Generative Model with Semantic Embedding and Integrated Classifier for Generalized Zero-Shot Learning
    Pambala, Ayyappa Kumar
    Dutta, Titir
    Biswas, Soma
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 1226 - 1235
  • [22] Semantic-guided Reinforced Region Embedding for Generalized Zero-Shot Learning
    Ge, Jiannan
    Xie, Hongtao
    Min, Shaobo
    Zhang, Yongdong
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 1406 - 1414
  • [23] Learning visual-and-semantic knowledge embedding for zero-shot image classification
    Kong, Dehui
    Li, Xiliang
    Wang, Shaofan
    Li, Jinghua
    Yin, Baocai
    APPLIED INTELLIGENCE, 2023, 53 (02) : 2250 - 2264
  • [25] Contrastive Embedding for Generalized Zero-Shot Learning
    Han, Zongyan
    Fu, Zhenyong
    Chen, Shuo
    Yang, Jian
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 2371 - 2381
  • [26] Transductive Unbiased Embedding for Zero-Shot Learning
    Song, Jie
    Shen, Chengchao
    Yang, Yezhou
    Liu, Yang
    Song, Mingli
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 1024 - 1033
  • [27] Disentangled Ontology Embedding for Zero-shot Learning
    Geng, Yuxia
    Chen, Jiaoyan
    Zhang, Wen
    Xu, Yajing
    Chen, Zhuo
    Pan, Jeff Z.
    Huang, Yufeng
    Xiong, Feiyu
    Chen, Huajun
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 443 - 453
  • [28] Learning a Deep Embedding Model for Zero-Shot Learning
    Zhang, Li
    Xiang, Tao
    Gong, Shaogang
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3010 - 3019
  • [29] Adversarial Zero-Shot Learning with Semantic Augmentation
    Tong, Bin
    Klinkigt, Martin
    Chen, Junwen
    Cui, Xiankun
    Kong, Quan
    Murakami, Tomokazu
    Kobayashi, Yoshiyuki
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 2476 - 2483
  • [30] Preserving Semantic Relations for Zero-Shot Learning
    Annadani, Yashas
    Biswas, Soma
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 7603 - 7612