Learning Optimal Priors for Task-Invariant Representations in Variational Autoencoders

Cited by: 2
Authors
Takahashi, Hiroshi [1 ]
Iwata, Tomoharu [1 ]
Kumagai, Atsutoshi [1 ]
Kanai, Sekitoshi [1 ]
Yamada, Masanori [1 ]
Yamanaka, Yuuki [1 ]
Kashima, Hisashi [2 ]
Affiliations
[1] NTT, Tokyo, Japan
[2] Kyoto Univ, Kyoto, Japan
Keywords
Variational autoencoder; Multi-task learning;
DOI
10.1145/3534678.3539291
CLC Classification Number
TP [Automation technology; computer technology];
Discipline Classification Code
0812;
Abstract
The variational autoencoder (VAE) is a powerful latent variable model for unsupervised representation learning. However, it does not perform well when data points are insufficient. To improve performance in such situations, the conditional VAE (CVAE) is widely used; it shares task-invariant knowledge across multiple tasks through a task-invariant latent variable. In the CVAE, the posterior of the latent variable given the data point and task is regularized by a task-invariant prior, which is modeled as the standard Gaussian distribution. Although this regularization encourages independence between the latent variable and the task, the latent variable remains dependent on the task. To reduce this task-dependency, previous work introduced an additional regularizer; however, its learned representation does not work well on the target tasks. In this study, we theoretically investigate why the CVAE cannot sufficiently reduce the task-dependency, and show that the simple standard Gaussian prior is one of the causes. Based on this analysis, we propose a theoretically optimal prior for reducing the task-dependency. In addition, we theoretically show that, unlike previous work, our learned representation works well on the target tasks. Experiments on various datasets show that our approach obtains better task-invariant representations, which improves the performance of various downstream applications such as density estimation and classification.
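To make the setup described in the abstract concrete, the following is a minimal sketch of the CVAE objective with the standard Gaussian task-invariant prior, i.e., the baseline whose shortcomings the paper analyzes. This is an illustrative PyTorch implementation under assumed dimensions and a Bernoulli likelihood; all class names, layer sizes, and variable names are hypothetical and are not the authors' code.

```python
# Minimal CVAE sketch (PyTorch): encoder q(z | x, t), decoder p(x | z, t),
# and the KL regularizer toward the standard Gaussian prior N(0, I).
# All sizes and names are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=784, t_dim=10, z_dim=16, h_dim=128):
        super().__init__()
        # Encoder q(z | x, t): conditions on both the data point x and the task t.
        self.enc = nn.Sequential(nn.Linear(x_dim + t_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder p(x | z, t): reconstructs x from the latent z and the task t.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + t_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x, t):
        # x: batch of data points in [0, 1]; t: one-hot task labels.
        h = self.enc(torch.cat([x, t], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        logits = self.dec(torch.cat([z, t], dim=-1))
        # Reconstruction term of the ELBO (Bernoulli likelihood assumed here).
        recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
        # KL( q(z | x, t) || N(0, I) ): the fixed task-invariant standard Gaussian
        # prior that the paper argues cannot fully remove task-dependency.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl  # negative ELBO to be minimized
```

The paper's contribution amounts to replacing the fixed N(0, I) in the KL term above with a theoretically optimal prior; the sketch shows only the baseline regularization that the proposed prior improves upon.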
Pages: 1739-1748
Page count: 10
Related Papers
50 records in total
  • [21] Identifying a task-invariant cognitive reserve network using task potency
    van Loenhoud, A. C.
    Habeck, C.
    van der Flier, W. M.
    Ossenkoppele, R.
    Stern, Y.
    NEUROIMAGE, 2020, 210
  • [22] Task-invariant Brain Responses to the Social Value of Faces
    Todorov, Alexander
    Said, Christopher P.
    Oosterhof, Nikolaas N.
    Engell, Andrew D.
    JOURNAL OF COGNITIVE NEUROSCIENCE, 2011, 23 (10) : 2766 - 2781
  • [23] Variational Multi-Task Learning with Gumbel-Softmax Priors
    Shen, Jiayi
    Zhen, Xiantong
    Worring, Marcel
    Shao, Ling
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021
  • [24] DVAE#: Discrete Variational Autoencoders with Relaxed Boltzmann Priors
    Vahdat, Arash
    Andriyash, Evgeny
    Macready, William G.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [25] Generating visual representations for zero-shot learning via adversarial learning and variational autoencoders
    Gull, Muqaddas
    Arif, Omar
    INTERNATIONAL JOURNAL OF GENERAL SYSTEMS, 2023, 52 (05) : 636 - 651
  • [26] Task-Invariant Centroidal Momentum Shaping for Lower-Limb Exoskeletons
    Yu, Miao
    Lv, Ge
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 2054 - 2060
  • [27] Localized task-invariant emotional valence encoding revealed by intracranial recordings
    Weisholtz, Daniel S.
    Kreiman, Gabriel
    Silbersweig, David A.
    Stern, Emily
    Cha, Brannon
    Butler, Tracy
    SOCIAL COGNITIVE AND AFFECTIVE NEUROSCIENCE, 2022, 17 (06) : 549 - 558
  • [28] Variational Autoencoder with Implicit Optimal Priors
    Takahashi, Hiroshi
    Iwata, Tomoharu
    Yamanaka, Yuki
    Yamada, Masanori
    Yagi, Satoshi
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 5066 - 5073
  • [29] Variational Autoencoders to Learn Latent Representations of Speech Emotion
    Latif, Siddique
    Rana, Rajib
    Qadir, Junaid
    Epps, Julien
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 3107 - 3111
  • [30] Variational autoencoders learn transferrable representations of metabolomics data
    Gomari, Daniel P.
    Schweickart, Annalise
    Cerchietti, Leandro
    Paietta, Elisabeth
    Fernandez, Hugo
    Al-Amin, Hassen
    Suhre, Karsten
    Krumsiek, Jan
    COMMUNICATIONS BIOLOGY, 2022, 5 (01)