Learning Optimal Priors for Task-Invariant Representations in Variational Autoencoders

Cited by: 2
Authors
Takahashi, Hiroshi [1 ]
Iwata, Tomoharu [1 ]
Kumagai, Atsutoshi [1 ]
Kanai, Sekitoshi [1 ]
Yamada, Masanori [1 ]
Yamanaka, Yuuki [1 ]
Kashima, Hisashi [2 ]
Affiliations
[1] NTT, Tokyo, Japan
[2] Kyoto Univ, Kyoto, Japan
Keywords
Variational autoencoder; Multi-task learning;
DOI
10.1145/3534678.3539291
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The variational autoencoder (VAE) is a powerful latent variable model for unsupervised representation learning. However, it does not work well when data points are insufficient. To improve performance in such situations, the conditional VAE (CVAE) is widely used; it aims to share task-invariant knowledge across multiple tasks through a task-invariant latent variable. In the CVAE, the posterior of the latent variable given the data point and task is regularized by a task-invariant prior, modeled as the standard Gaussian distribution. Although this regularization encourages independence between the latent variable and the task, the latent variable remains dependent on the task. To reduce this task-dependency, previous work introduced an additional regularizer; however, its learned representation does not work well on the target tasks. In this study, we theoretically investigate why the CVAE cannot sufficiently reduce the task-dependency and show that the simple standard Gaussian prior is one of the causes. Based on this, we propose a theoretically optimal prior for reducing the task-dependency. In addition, we theoretically show that, unlike the previous work, our learned representation works well on the target tasks. Experiments on various datasets show that our approach obtains better task-invariant representations, which improves the performance of various downstream applications such as density estimation and classification.
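The regularization the abstract refers to is the standard VAE KL term between the encoder's diagonal-Gaussian posterior and the standard Gaussian prior, KL(q(z|x,t) || N(0,I)). A minimal NumPy sketch of that term (an illustration of the closed-form KL, not the authors' code or their proposed optimal prior):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, diag(exp(log_var))) || N(0, I)),
    one value per data point (rows), summed over latent dimensions:
    0.5 * sum(exp(log_var) + mu^2 - 1 - log_var)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Two encoded points in a 2-D latent space; a posterior that already
# matches the prior (zero mean, unit variance) incurs zero KL penalty.
mu = np.array([[0.0, 0.0], [1.0, -1.0]])
log_var = np.zeros((2, 2))
print(kl_to_standard_normal(mu, log_var))  # -> [0. 1.]
```

Because this prior is the same for every task, minimizing the KL only pushes each posterior toward N(0, I); as the abstract notes, that alone is not enough to make the latent variable independent of the task.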
Pages: 1739-1748
Page count: 10
Related Papers
50 records total
  • [41] Task-Invariant Learning of Continuous Joint Kinematics during Steady-State and Transient Ambulation Using Ultrasound Sensing
    Jahanandish, M. Hassan
    Rabe, Kaitlin G.
    Srinivas, Abhishek
    Fey, Nicholas P.
    Hoyt, Kenneth
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021: 10536-10542
  • [42] Learning state representations with robotic priors
    Jonschkowski, Rico
    Brock, Oliver
    AUTONOMOUS ROBOTS, 2015, 39 (03): 407-428
  • [43] LGSim: local task-invariant and global task-specific similarity for few-shot classification
    Li, Wenjing
    Wu, Zhongcheng
    Zhang, Jun
    Ren, Tingting
    Li, Fang
    NEURAL COMPUTING & APPLICATIONS, 2020, 32 (16): 13065-13076
  • [44] The dorsomedial prefrontal cortex computes task-invariant relative subjective value for self and other
    Piva, Matthew
    Velnoskey, Kayla
    Jia, Ruonan
    Nair, Amrita
    Levy, Ifat
    Chang, Steve W. C.
    ELIFE, 2019, 8
  • [45] Learning Manifold Dimensions with Conditional Variational Autoencoders
    Zheng, Yijia
    He, Tong
    Qiu, Yixuan
    Wipf, David
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [46] Variational Autoencoders with Triplet Loss for Representation Learning
    Isil, Cagatay
    Solmaz, Berkan
    Koc, Aykut
    2018 26TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2018,
  • [47] Learning hard quantum distributions with variational autoencoders
    Rocchetto, Andrea
    Grant, Edward
    Strelchuk, Sergii
    Carleo, Giuseppe
    Severini, Simone
    NPJ QUANTUM INFORMATION, 2018, 4
  • [48] Learning conditional variational autoencoders with missing covariates
    Ramchandran, Siddharth
    Tikhonov, Gleb
    Lonnroth, Otto
    Tiikkainen, Pekka
    Lahdesmaki, Harri
    PATTERN RECOGNITION, 2024, 147
  • [49] InfoVAE: Balancing Learning and Inference in Variational Autoencoders
    Zhao, Shengjia
    Song, Jiaming
    Ermon, Stefano
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019: 5885-5892