Learning Disentangled Joint Continuous and Discrete Representations

Cited by: 0
Author: Dupont, Emilien [1]
Affiliation: [1] Schlumberger Software Technol Innovat Ctr, Menlo Pk, CA 94025 USA
Keywords: (none listed)
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
We present a framework for learning disentangled and interpretable jointly continuous and discrete representations in an unsupervised manner. By augmenting the continuous latent distribution of variational autoencoders with a relaxed discrete distribution and controlling the amount of information encoded in each latent unit, we show how continuous and categorical factors of variation can be discovered automatically from data. Experiments show that the framework disentangles continuous and discrete generative factors on various datasets and outperforms current disentangling methods when a discrete generative factor is prominent.
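The abstract describes the approach only at a high level. The following is a minimal, illustrative PyTorch sketch (not the authors' implementation) of the general idea: a Gaussian continuous code is combined with a Gumbel-Softmax relaxed categorical code, and each KL term is pushed toward a target capacity to control how much information each part of the latent encodes. Function names, the temperature, gamma, and the capacity values are assumptions made here for illustration.

```python
# Minimal sketch (illustrative, not the authors' code): a joint continuous +
# discrete VAE latent with a capacity-controlled KL penalty.
import math
import torch
import torch.nn.functional as F

def sample_joint_latent(mu, log_var, alpha_logits, temperature=0.67):
    """Sample a joint continuous + discrete latent code.

    mu, log_var:  parameters of the Gaussian continuous latents
    alpha_logits: unnormalised logits of the categorical latent
    """
    # Continuous part: standard reparameterisation trick.
    z_cont = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)

    # Discrete part: Gumbel-Softmax relaxation of the categorical distribution.
    gumbel = -torch.log(-torch.log(torch.rand_like(alpha_logits) + 1e-20) + 1e-20)
    z_disc = F.softmax((alpha_logits + gumbel) / temperature, dim=-1)

    return torch.cat([z_cont, z_disc], dim=-1)

def capacity_controlled_kl(mu, log_var, alpha_logits, cap_cont, cap_disc, gamma=30.0):
    """Penalty pushing each KL term toward a target capacity (in nats)."""
    # KL(q(z|x) || N(0, I)) for the Gaussian latents, averaged over the batch.
    kl_cont = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1).mean()

    # KL(q(c|x) || Uniform(K)) for the categorical latents.
    probs = F.softmax(alpha_logits, dim=-1)
    log_k = math.log(alpha_logits.shape[-1])
    kl_disc = torch.sum(probs * (torch.log(probs + 1e-20) + log_k), dim=-1).mean()

    # Penalise deviation from the target capacities rather than the raw KL,
    # which controls the information encoded in each latent group.
    return gamma * (kl_cont - cap_cont).abs() + gamma * (kl_disc - cap_disc).abs()
```

In such a scheme the capacities cap_cont and cap_disc would typically be increased gradually during training, so that the amount of information allowed into the continuous and discrete parts of the code grows over time.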
Pages: 11
Related papers (50 in total):
  • [1] Friede, David; Reimers, Christian; Stuckenschmidt, Heiner; Niepert, Mathias. Learning Disentangled Discrete Representations. Machine Learning and Knowledge Discovery in Databases: Research Track, ECML PKDD 2023, Pt. IV, 2023, 14172: 593-609.
  • [2] Wang, Zizhou; Wang, Yan; Feng, Yangqin; Du, Jiawei; Liu, Yong; Goh, Rick Siow Mong; Zhen, Liangli. Continuous Disentangled Joint Space Learning for Domain Generalization. IEEE Transactions on Neural Networks and Learning Systems, 2024.
  • [3] Ma, Jianxin; Zhou, Chang; Cui, Peng; Yang, Hongxia; Zhu, Wenwu. Learning Disentangled Representations for Recommendation. Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019, 32.
  • [4] Peng, Xingchao; Huang, Zijun; Sun, Ximeng; Saenko, Kate. Domain Agnostic Learning with Disentangled Representations. International Conference on Machine Learning, Vol. 97, 2019, 97.
  • [5] Vasilakes, Jake; Zerva, Chrysoula; Miwa, Makoto; Ananiadou, Sophia. Learning Disentangled Representations of Negation and Uncertainty. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1 (Long Papers), 2022: 8380-8397.
  • [6] Kahana, Jonathan; Hoshen, Yedid. A Contrastive Objective for Learning Disentangled Representations. Computer Vision, ECCV 2022, Pt. XXVI, 2022, 13686: 579-595.
  • [7] Liu, Xiao; Sanchez, Pedro; Thermos, Spyridon; O'Neil, Alison Q.; Tsaftaris, Sotirios A. Learning Disentangled Representations in the Imaging Domain. Medical Image Analysis, 2022, 80.
  • [8] Locatello, Francesco; Bauer, Stefan; Lucic, Mario; Raetsch, Gunnar; Gelly, Sylvain; Schoelkopf, Bernhard; Bachem, Olivier. A Commentary on the Unsupervised Learning of Disentangled Representations. Thirty-Fourth AAAI Conference on Artificial Intelligence, the Thirty-Second Innovative Applications of Artificial Intelligence Conference and the Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, 2020, 34: 13681-13684.
  • [9] Zhang, Ziyuan; Tran, Luan; Liu, Feng; Liu, Xiaoming. On Learning Disentangled Representations for Gait Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44 (01): 345-360.
  • [10] Gaujac, Benoit; Feige, Ilya; Barber, David. Learning Disentangled Representations with the Wasserstein Autoencoder. Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2021: Research Track, Pt. III, 2021, 12977: 69-84.