LaFea: Learning Latent Representation Beyond Feature for Universal Domain Adaptation

Cited by: 1
Authors
Lv, Qingxuan [1 ]
Li, Yuezun [1 ]
Dong, Junyu [1 ]
Guo, Ziqian [1 ]
Affiliation
[1] Ocean Univ China, Dept Comp Sci & Technol, Qingdao 266100, Shandong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Transfer learning; adversarial discriminator; autoencoder; universal domain adaptation
DOI
10.1109/TCSVT.2023.3267765
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
Universal Domain Adaptation (UniDA) is a recently introduced problem that aims to transfer knowledge from a source domain to a target domain without any prior knowledge of the label sets. The main challenge is to separate common samples from private samples in the target domain. In general, existing methods achieve this goal by performing domain adaptation only on the features extracted by the backbone network. However, relying solely on the backbone network may not fully exploit the effectiveness of these features, because 1) the discrepancy between the two domains can naturally distract the learning of the backbone network, and 2) irrelevant content in the samples (e.g., backgrounds) is likely to pass through the backbone network and may therefore hinder the learning of domain-informative features. To this end, we describe a new method that provides extra guidance to the learning of the backbone network based on a latent representation beyond features (LaFea). We are motivated by the fact that the latent representation can be learned to capture the domain-relevant information scattered in the features, and that learning this latent representation naturally promotes the effectiveness of the corresponding features in return. To achieve this goal, we develop a simple GAN-style architecture that transforms features into the latent representation and propose new objectives to learn this representation adversarially. Note that the latent representation only serves as an auxiliary signal during training and is not needed at inference. Extensive experiments on four datasets corroborate the superiority of our method over state-of-the-art approaches.
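The abstract only outlines the design, so the following is a minimal, hypothetical sketch of how such an auxiliary latent branch could be attached to a backbone. The module names (LatentEncoder, DomainDiscriminator), dimensions, and the specific classification/adversarial losses are assumptions standing in for the paper's actual objectives, which are not reproduced in this record; only the overall structure (features mapped to a latent representation, a GAN-style discriminator on that latent, and the latent branch discarded at inference) follows the abstract.

```python
import torch
import torch.nn as nn

# Hypothetical sketch -- module names, dimensions, and losses are assumptions,
# not the paper's implementation. Only the overall idea follows the abstract:
# an auxiliary latent branch on top of backbone features, trained adversarially
# and discarded at inference.

class LatentEncoder(nn.Module):
    """Maps backbone features to a compact latent representation (training-time only)."""
    def __init__(self, feat_dim: int = 2048, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, latent_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)


class DomainDiscriminator(nn.Module):
    """GAN-style discriminator scoring whether a latent vector comes from source or target."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def training_losses(backbone, classifier, encoder, discriminator,
                    x_src, y_src, x_tgt):
    """One illustrative step: source classification plus a stand-in adversarial
    loss on the latent representation (the paper's exact objectives differ)."""
    bce = nn.BCEWithLogitsLoss()
    f_src, f_tgt = backbone(x_src), backbone(x_tgt)
    z_src, z_tgt = encoder(f_src), encoder(f_tgt)

    # Discriminator: separate source latents (label 1) from target latents (label 0).
    d_loss = bce(discriminator(z_src.detach()), torch.ones(len(z_src), 1, device=z_src.device)) + \
             bce(discriminator(z_tgt.detach()), torch.zeros(len(z_tgt), 1, device=z_tgt.device))

    # Backbone + encoder: fool the discriminator on target latents while the
    # classifier supervises source features; this extra signal guides the backbone.
    g_loss = nn.CrossEntropyLoss()(classifier(f_src), y_src) + \
             bce(discriminator(z_tgt), torch.ones(len(z_tgt), 1, device=z_tgt.device))
    return d_loss, g_loss

# At inference, predictions use only backbone(x) and classifier(.);
# the encoder and discriminator are dropped.
```

In a training loop, one would typically keep two optimizers and alternate stepping d_loss (discriminator parameters) and g_loss (backbone, encoder, and classifier parameters), as in standard GAN training; the auxiliary branch then serves purely as extra guidance and adds no cost at test time.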
Pages: 6733-6746
Number of pages: 14
Related Papers
50 records in total
  • [1] Joint metric and feature representation learning for unsupervised domain adaptation
    Xie, Yue
    Du, Zhekai
    Li, Jingjing
    Jing, Mengmeng
    Chen, Erpeng
    Lu, Ke
    KNOWLEDGE-BASED SYSTEMS, 2020, 192
  • [2] Domain Adaptation In Reinforcement Learning Via Latent Unified State Representation
    Xing, Jinwei
    Nagata, Takashi
    Chen, Kexin
    Zou, Xinyun
    Neftci, Emre
    Krichmar, Jeffrey L.
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10452 - 10459
  • [3] TLR: TRANSFER LATENT REPRESENTATION FOR UNSUPERVISED DOMAIN ADAPTATION
    Xiao, Pan
    Du, Bo
    Wu, Jia
    Zhang, Lefei
    Hu, Ruimin
    Li, Xuelong
2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018
  • [4] Representation learning for unsupervised domain adaptation
    Xu Y.
    Yan H.
Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology, 2021, 53 (02): 40 - 46
  • [5] Hierarchical feature disentangling network for universal domain adaptation
    Gao, Yuan
    Chen, Peipeng
    Gao, Yue
    Wang, Jinpeng
    Pan, YoungSun
    Ma, Andy J.
    PATTERN RECOGNITION, 2022, 127
  • [6] Distinguishable IQ Feature Representation for Domain-Adaptation Learning of WiFi Device Fingerprints
    Elmaghbub, Abdurrahman
    Hamdaoui, Bechir
IEEE TRANSACTIONS ON MACHINE LEARNING IN COMMUNICATIONS AND NETWORKING, 2024, 2 : 1404 - 1423
  • [7] Domain Adaptation Transfer Learning by Kernel Representation Adaptation
    Chen, Xiaoyi
    Lengelle, Regis
    PATTERN RECOGNITION APPLICATIONS AND METHODS, 2018, 10857 : 45 - 61
  • [8] Towards Interpretable Feature Representation for Domain Adaptation Problem
    Fang, Yi
    Chen, Zhi-Jie
    Zhou, Qianwei
    Li, Xiao-Xin
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 61 - 68
  • [9] Domain-Invariant Feature Learning for Domain Adaptation
    Tu, Ching-Ting
    Lin, Hsiau-Wen
    Lin, Hwei Jen
    Tokuyama, Yoshimasa
    Chu, Chia-Hung
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2023, 37 (03)
  • [10] Latent subspace sparse representation based unsupervised domain adaptation
    Liu, Shuai
    Sun, Hao
    Zhao, Fumin
    Zhou, Shilin
    MIPPR 2015: PATTERN RECOGNITION AND COMPUTER VISION, 2015, 9813