Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation

Cited by: 58
Authors:
Liu, Yen-Cheng [1 ]
Yeh, Yu-Ying [1 ]
Fu, Tzu-Chien [2 ]
Wang, Sheng-De [1 ]
Chiu, Wei-Chen [3 ]
Wang, Yu-Chiang Frank [1 ]
Affiliations:
[1] National Taiwan University, Department of Electrical Engineering, Taipei, Taiwan
[2] Northwestern University, Department of Electrical Engineering & Computer Science, Evanston, IL 60208, USA
[3] National Chiao Tung University, Department of Computer Science, Hsinchu, Taiwan
DOI: 10.1109/CVPR.2018.00924
CLC classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
While representation learning aims to derive interpretable features for describing visual data, representation disentanglement goes further by factorizing such features so that particular image attributes can be identified and manipulated. However, this task cannot easily be addressed without ground-truth annotation for the training data. To address this problem, we propose a novel deep learning model, the Cross-Domain Representation Disentangler (CDRD). By observing fully annotated source-domain data and unlabeled target-domain data of interest, our model bridges information across the two data domains and transfers the attribute information accordingly, so that cross-domain feature disentanglement and adaptation are performed jointly. In the experiments, we provide qualitative results verifying our disentanglement capability. Moreover, we confirm that our model can be applied to classification tasks of unsupervised domain adaptation, and that it performs favorably against state-of-the-art image disentanglement and translation methods.
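To make the setup described in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch of a CDRD-style pipeline: a generator conditioned on a sampled attribute code, a source-domain discriminator with an auxiliary attribute classifier supervised by the labeled source data, and a target-domain discriminator that receives only an adversarial signal. The module sizes, the single shared generator, the 28x28 grayscale inputs, and the unweighted loss sum are illustrative assumptions rather than the authors' actual implementation.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Decodes a noise vector concatenated with a one-hot attribute code.
    A single generator is used here for brevity; CDRD additionally uses
    domain-specific layers per domain."""
    def __init__(self, latent_dim=64, n_attrs=10):
        super().__init__()
        self.fc = nn.Linear(latent_dim + n_attrs, 64 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, attr):
        h = self.fc(torch.cat([z, attr], dim=1)).view(-1, 64, 7, 7)
        return self.deconv(h)  # (B, 1, 28, 28)

class Discriminator(nn.Module):
    """Adversarial (real/fake) head plus an auxiliary attribute head."""
    def __init__(self, n_attrs=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        self.adv_head = nn.Linear(64 * 7 * 7, 1)         # real vs. generated
        self.attr_head = nn.Linear(64 * 7 * 7, n_attrs)  # attribute prediction

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.attr_head(h)

def discriminator_losses(G, D_src, D_tgt, x_src, y_src, x_tgt, n_attrs=10):
    """One illustrative discriminator update on a labeled source batch and an
    unlabeled target batch (28x28 grayscale images assumed)."""
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    z = torch.randn(x_src.size(0), 64)  # shared latent code across domains
    attr = torch.eye(n_attrs)[torch.randint(0, n_attrs, (x_src.size(0),))]
    fake = G(z, attr)
    # Source discriminator: adversarial loss plus supervised attribute loss.
    adv_r, attr_r = D_src(x_src)
    adv_f, _ = D_src(fake.detach())
    loss_src = (bce(adv_r, torch.ones_like(adv_r))
                + bce(adv_f, torch.zeros_like(adv_f))
                + ce(attr_r, y_src))
    # Target discriminator: adversarial loss only (no target labels); attribute
    # semantics reach the target domain through the shared generator and the
    # sampled attribute codes.
    adv_r_t, _ = D_tgt(x_tgt)
    adv_f_t, _ = D_tgt(fake.detach())
    loss_tgt = (bce(adv_r_t, torch.ones_like(adv_r_t))
                + bce(adv_f_t, torch.zeros_like(adv_f_t)))
    return loss_src + loss_tgt

As the abstract notes, it is this jointly learned attribute information that also supports the unsupervised domain adaptation classification evaluated in the experiments.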
Pages: 8867-8876 (10 pages)
Related Papers (10 of 50 shown)
  • [1] Learning Disentangled Representation for Multimodal Cross-Domain Sentiment Analysis. Zhang, Yuhao; Zhang, Ying; Guo, Wenya; Cai, Xiangrui; Yuan, Xiaojie. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(10): 7956-7966.
  • [2] Disentangled Representation for Cross-Domain Medical Image Segmentation. Wang, Jie; Zhong, Chaoliang; Feng, Cheng; Zhang, Ying; Sun, Jun; Yokota, Yasuto. IEEE Transactions on Instrumentation and Measurement, 2023, 72.
  • [3] Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies. Achille, Alessandro; Eccles, Tom; Matthey, Loic; Burgess, Christopher P.; Watters, Nick; Lerchner, Alexander; Higgins, Irina. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018, 31.
  • [4] FedDCSR: Federated Cross-domain Sequential Recommendation via Disentangled Representation Learning. Zhang, Hongyu; Zheng, Dongyi; Yang, Xu; Feng, Jiyuan; Liao, Qing. Proceedings of the 2024 SIAM International Conference on Data Mining (SDM), 2024: 535-543.
  • [5] DisenCDR: Learning Disentangled Representations for Cross-Domain Recommendation. Cao, Jiangxia; Lin, Xixun; Cong, Xin; Ya, Jing; Liu, Tingwen; Wang, Bin. Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '22), 2022: 267-277.
  • [6] Cross-domain Face Presentation Attack Detection via Multi-domain Disentangled Representation Learning. Wang, Guoqing; Han, Hu; Shan, Shiguang; Chen, Xilin. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 6677-6686.
  • [7] DADRnet: Cross-domain image dehazing via domain adaptation and disentangled representation. Li, Xiaopeng; Yu, Hu; Zhao, Chen; Fan, Cien; Zou, Lian. Neurocomputing, 2023, 544.
  • [8] Cross-Domain Microscopy Cell Counting By Disentangled Transfer Learning. Wang, Zuhui. Trustworthy Machine Learning for Healthcare (TML4H 2023), 2023, 13932: 93-105.
  • [9] Representation Learning for Imbalanced Cross-Domain Classification. Cheng, Lu; Guo, Ruocheng; Candan, K. Selcuk; Liu, Huan. Proceedings of the 2020 SIAM International Conference on Data Mining (SDM), 2020: 478-486.
  • [10] Unsupervised Cross-Domain Word Representation Learning. Bollegala, Danushka; Maehara, Takanori; Kawarabayashi, Ken-Ichi. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Vol 1, 2015: 730-740.