Learning Optimal Priors for Task-Invariant Representations in Variational Autoencoders

Cited by: 2
Authors
Takahashi, Hiroshi [1 ]
Iwata, Tomoharu [1 ]
Kumagai, Atsutoshi [1 ]
Kanai, Sekitoshi [1 ]
Yamada, Masanori [1 ]
Yamanaka, Yuuki [1 ]
Kashima, Hisashi [2 ]
Affiliations
[1] NTT, Tokyo, Japan
[2] Kyoto Univ, Kyoto, Japan
Keywords
Variational autoencoder; Multi-task learning
DOI
10.1145/3534678.3539291
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The variational autoencoder (VAE) is a powerful latent variable model for unsupervised representation learning. However, it does not work well when data points are insufficient. To improve performance in such situations, the conditional VAE (CVAE) is widely used; it aims to share task-invariant knowledge across multiple tasks through a task-invariant latent variable. In the CVAE, the posterior of the latent variable given the data point and task is regularized by a task-invariant prior, modeled as the standard Gaussian distribution. Although this regularization encourages independence between the latent variable and the task, the latent variable remains dependent on the task. To reduce this task-dependency, previous work introduced an additional regularizer; however, its learned representation does not work well on the target tasks. In this study, we theoretically investigate why the CVAE cannot sufficiently reduce the task-dependency and show that the simple standard Gaussian prior is one of the causes. Based on this, we propose a theoretically optimal prior for reducing the task-dependency. In addition, we show theoretically that, unlike previous work, our learned representation works well on the target tasks. Experiments on various datasets show that our approach obtains better task-invariant representations, which improves the performance of various downstream applications such as density estimation and classification.
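To make the setup concrete, below is a minimal PyTorch sketch of the CVAE objective the abstract describes: an encoder q(z | x, t) conditioned on both the data point x and the task t, with a KL term that regularizes the posterior toward the standard Gaussian prior N(0, I), the very prior the paper identifies as a cause of residual task-dependency. This is an illustrative sketch, not the authors' implementation; the module names, dimensions, and one-hot task encoding are assumptions.

```python
# Minimal CVAE sketch (not the authors' implementation; names and sizes are
# illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=784, t_dim=10, z_dim=16, h_dim=256):
        super().__init__()
        # Encoder q(z | x, t): conditioned on the data point x and the task t.
        self.enc = nn.Sequential(nn.Linear(x_dim + t_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder p(x | z, t).
        self.dec = nn.Sequential(
            nn.Linear(z_dim + t_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x, t):
        h = self.enc(torch.cat([x, t], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(torch.cat([z, t], dim=-1)), mu, logvar

def negative_elbo(model, x, t):
    """Reconstruction loss + KL(q(z|x,t) || N(0, I)).

    The closed-form KL term pulls the posterior toward the standard Gaussian
    prior -- the regularizer that, per the abstract, encourages but does not
    achieve independence between z and the task t.
    """
    x_logits, mu, logvar = model(x, t)
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Here x would be a batch of binarized, flattened data points and t a one-hot task indicator; the paper's proposal replaces the fixed N(0, I) prior in the KL term with a learned, theoretically optimal prior.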
Pages: 1739-1748
Number of pages: 10
Related Papers
50 items in total
  • [1] Resampled Priors for Variational Autoencoders
    Bauer, Matthias
    Mnih, Andriy
22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019: 66-75
  • [2] Optimal Task-Invariant Energetic Control for a Knee-Ankle Exoskeleton
    Lin, Jianping
Divekar, Nikhil V.
    Lv, Ge
    Gregg, Robert D.
2021 AMERICAN CONTROL CONFERENCE (ACC), 2021
  • [3] Optimal Task-Invariant Energetic Control for a Knee-Ankle Exoskeleton
    Lin, Jianping
Divekar, Nikhil V.
    Lv, Ge
    Gregg, Robert D.
IEEE CONTROL SYSTEMS LETTERS, 2021, 5(5): 1711-1716
  • [4] Learning minimal representations of stochastic processes with variational autoencoders
    Fernandez-Fernandez, Gabriel
    Manzo, Carlo
    Lewenstein, Maciej
    Dauphin, Alexandre
    Munoz-Gil, Gorka
PHYSICAL REVIEW E, 2024, 110(1)
  • [5] Learning Representations by Maximizing Mutual Information in Variational Autoencoders
    Rezaabad, Ali Lotfi
    Vishwanath, Sriram
2020 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2020: 2729-2734
  • [6] A task-invariant cognitive reserve network
    Stern, Yaakov
    Gazes, Yunglin
Razlighi, Qolamreza
    Steffener, Jason
    Habeck, Christian
NEUROIMAGE, 2018, 178: 36-45
• [7] Learning Subject-Invariant Representations from Speech-Evoked EEG Using Variational Autoencoders
    Bollens, Lies
    Francart, Tom
    Van Hamme, Hugo
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 1256-1260
  • [8] Task-invariant aspects of goodness in perceptual representation
    Lachmann, T
    van Leeuwen, C
QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY SECTION A-HUMAN EXPERIMENTAL PSYCHOLOGY, 2005, 58(7): 1295-1310
  • [9] Item Recommendation with Variational Autoencoders and Heterogeneous Priors
    Karamanolakis, Giannis
    Cherian, Kevin Raji
    Narayan, Ananth Ravi
    Yuan, Jie
    Tang, Da
    Jebara, Tony
PROCEEDINGS OF THE 3RD WORKSHOP ON DEEP LEARNING FOR RECOMMENDER SYSTEMS (DLRS), 2018: 10-14
  • [10] Variational Autoencoders with Riemannian Brownian Motion Priors
    Kalatzis, Dimitris
    Eklund, David
    Arvanitidis, Georgios
    Hauberg, Soren
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020