Learning Optimal Priors for Task-Invariant Representations in Variational Autoencoders

Cited by: 2
Authors
Takahashi, Hiroshi [1 ]
Iwata, Tomoharu [1 ]
Kumagai, Atsutoshi [1 ]
Kanai, Sekitoshi [1 ]
Yamada, Masanori [1 ]
Yamanaka, Yuuki [1 ]
Kashima, Hisashi [2 ]
Affiliations
[1] NTT, Japan
[2] Kyoto Univ, Kyoto, Japan
Keywords
Variational autoencoder; Multi-task learning
DOI
10.1145/3534678.3539291
CLC Number
TP [Automation & Computer Technology]
Subject Classification Code
0812
Abstract
The variational autoencoder (VAE) is a powerful latent variable model for unsupervised representation learning. However, it does not work well when the number of data points is insufficient. To improve performance in such situations, the conditional VAE (CVAE) is widely used; it shares task-invariant knowledge across multiple tasks through a task-invariant latent variable. In the CVAE, the posterior of the latent variable given the data point and task is regularized toward a task-invariant prior, modeled as the standard Gaussian distribution. Although this regularization encourages independence between the latent variable and the task, the latent variable remains dependent on the task. To reduce this task-dependency, previous work introduced an additional regularizer; however, its learned representation does not work well on the target tasks. In this study, we theoretically investigate why the CVAE cannot sufficiently reduce the task-dependency and show that the simple standard Gaussian prior is one of the causes. Based on this analysis, we propose a theoretically optimal prior for reducing the task-dependency. In addition, we show theoretically that, unlike previous work, our learned representation works well on the target tasks. Experiments on various datasets show that our approach obtains better task-invariant representations, which improve the performance of downstream applications such as density estimation and classification.
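To make the regularizer discussed in the abstract concrete, the following is a minimal NumPy sketch (not the authors' code) of the closed-form KL term that pulls each CVAE posterior q(z | x, t) toward the standard Gaussian prior N(0, I). The encoder outputs below are hypothetical; they illustrate how two tasks can each incur the same KL penalty while still occupying distinct regions of latent space, so the latent variable remains task-dependent.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    the per-sample term that regularizes the CVAE posterior
    q(z | x, t) toward the task-invariant standard Gaussian prior."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Hypothetical encoder outputs for two tasks: each posterior is pulled
# toward N(0, I), but the tasks cluster in different latent regions.
mu_task_a = np.array([[0.8, 0.8]])    # task A clusters at (+, +)
mu_task_b = np.array([[-0.8, -0.8]])  # task B clusters at (-, -)
log_var = np.zeros((1, 2))            # unit variance for simplicity

# Both tasks incur the identical penalty (0.64 nats here), yet their
# means stay distinct, so z still carries task information.
print(kl_to_standard_normal(mu_task_a, log_var))
print(kl_to_standard_normal(mu_task_b, log_var))
```

This is why a fixed standard Gaussian prior alone cannot remove task-dependency: the KL term is invariant to where each task's posteriors sit relative to one another, which motivates replacing it with a learned, theoretically optimal prior as the paper proposes.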
Pages: 1739 - 1748 (10 pages)
Related Papers
50 records in total
  • [31] Variational autoencoders learn transferrable representations of metabolomics data
    Gomari, Daniel P.; Schweickart, Annalise; Cerchietti, Leandro; Paietta, Elisabeth; Fernandez, Hugo; Al-Amin, Hassen; Suhre, Karsten; Krumsiek, Jan
    Communications Biology, 5
  • [32] Learning Latent Subspaces in Variational Autoencoders
    Klys, Jack; Snell, Jake; Zemel, Richard
    Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018, 31
  • [33] Task-dependent optimal representations for cerebellar learning
    Xie, Marjorie; Muscinelli, Samuel P.; Harris, Kameron Decker; Litwin-Kumar, Ashok
    eLife, 2023, 12
  • [34] Learning Graph Variational Autoencoders with Constraints and Structured Priors for Conditional Indoor 3D Scene Generation
    Chattopadhyay, Aditya; Zhang, Xi; Wipf, David Paul; Arora, Himanshu; Vidal, Rene
    2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023: 785 - 794
  • [35] Learning Grounded Meaning Representations with Autoencoders
    Silberer, Carina; Lapata, Mirella
    Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, Vol 1, 2014: 721 - 732
  • [36] Laplacian Autoencoders for Learning Stochastic Representations
    Miani, Marco; Warburg, Frederik; Moreno-Munoz, Pablo; Detlefsen, Nicki Skafte; Hauberg, Soren
    Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
  • [37] LGSim: local task-invariant and global task-specific similarity for few-shot classification
    Li, Wenjing; Wu, Zhongcheng; Zhang, Jun; Ren, Tingting; Li, Fang
    Neural Computing and Applications, 2020, 32: 13065 - 13076
  • [38] PriorVAE: encoding spatial priors with variational autoencoders for small-area estimation
    Semenova, Elizaveta; Xu, Yidan; Howes, Adam; Rashid, Theo; Bhatt, Samir; Mishra, Swapnil; Flaxman, Seth
    Journal of the Royal Society Interface, 2022, 19 (191)
  • [39] Creating Latent Representations of Synthesizer Patches using Variational Autoencoders
    Peachey, Matthew; Oore, Sageev; Malloch, Joseph
    2023 4th International Symposium on the Internet of Sounds, 2023: 83 - 89
  • [40] Learning state representations with robotic priors
    Jonschkowski, Rico; Brock, Oliver
    Autonomous Robots, 2015, 39: 407 - 428