InfoVAE: Balancing Learning and Inference in Variational Autoencoders

Cited: 0
Authors
Zhao, Shengjia [1 ]
Song, Jiaming [1 ]
Ermon, Stefano [1 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
Keywords
DOI
(not available)
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A key advance in learning generative models is the use of amortized inference distributions that are jointly trained with the models. We find that existing training objectives for variational autoencoders can lead to inaccurate amortized inference distributions and, in some cases, improving the objective provably degrades the inference quality. In addition, it has been observed that variational autoencoders tend to ignore the latent variables when combined with a decoding distribution that is too flexible. We again identify the cause in existing training criteria and propose a new class of objectives (InfoVAE) that mitigate these problems. We show that our model can significantly improve the quality of the variational posterior and can make effective use of the latent features regardless of the flexibility of the decoding distribution. Through extensive qualitative and quantitative analyses, we demonstrate that our models outperform competing approaches on multiple performance metrics.
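As background for the abstract above: InfoVAE-style objectives penalize a divergence between the aggregated posterior q(z) and the prior p(z), and a commonly used instantiation of that divergence is the maximum mean discrepancy (MMD). Below is a minimal NumPy sketch of a biased MMD estimator with an RBF kernel; the function names, bandwidth choice, and sample sizes are illustrative assumptions, not code from the paper.

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    """RBF (Gaussian) kernel matrix between the rows of a and the rows of b."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd(z_q, z_p, bandwidth=1.0):
    """Biased squared-MMD estimate between samples z_q ~ q(z) and z_p ~ p(z)."""
    return (rbf_kernel(z_q, z_q, bandwidth).mean()
            + rbf_kernel(z_p, z_p, bandwidth).mean()
            - 2.0 * rbf_kernel(z_q, z_p, bandwidth).mean())

# Illustration: prior samples vs. a matched and a mismatched "aggregated posterior".
rng = np.random.default_rng(0)
z_prior = rng.standard_normal((256, 2))
z_match = rng.standard_normal((256, 2))          # same distribution as the prior
z_shift = rng.standard_normal((256, 2)) + 3.0    # shifted, mismatched distribution

print(mmd(z_prior, z_match))  # near zero: distributions agree
print(mmd(z_prior, z_shift))  # clearly positive: the objective penalizes this
```

Used as a regularizer, this term pushes the aggregated posterior toward the prior even when the per-sample reconstruction term alone would let the decoder ignore the latent code.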
Pages: 5885 - 5892
Page count: 8
Related Papers
50 records in total
  • [21] Arbitrary conditional inference in variational autoencoders via fast prior network training
    Wu, Ga
    Domke, Justin
    Sanner, Scott
    MACHINE LEARNING, 2022, 111 (07) : 2537 - 2559
  • [23] Learning Hard Alignments with Variational Inference
    Lawson, Dieterich
    Chiu, Chung-Cheng
    Tucker, George
    Raffel, Colin
    Swersky, Kevin
    Jaitly, Navdeep
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 5799 - 5803
  • [24] Optimizing Few-Shot Learning Based on Variational Autoencoders
    Wei, Ruoqi
    Mahmood, Ausif
    ENTROPY, 2021, 23 (11)
  • [25] Bayesian mixture variational autoencoders for multi-modal learning
    Liao, Keng-Te
    Huang, Bo-Wei
    Yang, Chih-Chun
    Lin, Shou-De
    MACHINE LEARNING, 2022, 111 : 4329 - 4357
  • [26] Graph Representation Learning via Ladder Gamma Variational Autoencoders
    Sarkar, Arindam
    Mehta, Nikhil
    Rai, Piyush
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 5604 - 5611
  • [27] Mixture variational autoencoders
    Jiang, Shuoran
    Chen, Yarui
    Yang, Jucheng
    Zhang, Chuanlei
    Zhao, Tingting
    PATTERN RECOGNITION LETTERS, 2019, 128 : 263 - 269
  • [28] Learning Efficient, Collective Monte Carlo Moves with Variational Autoencoders
    Monroe, Jacob I.
    Shen, Vincent K.
    JOURNAL OF CHEMICAL THEORY AND COMPUTATION, 2022, 18 (06) : 3622 - 3636
  • [29] Task-Conditioned Variational Autoencoders for Learning Movement Primitives
    Noseworthy, Michael
    Paul, Rohan
    Roy, Subhro
    Park, Daehyung
    Roy, Nicholas
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
  • [30] Deep learning for photovoltaic defect detection using variational autoencoders
    Westraadt, Edward J.
    Brettenny, Warren J.
    Clohessy, Chantelle M.
    SOUTH AFRICAN JOURNAL OF SCIENCE, 2023, 119 (1-2)