InfoVAE: Balancing Learning and Inference in Variational Autoencoders

Citations: 0
Authors
Zhao, Shengjia [1 ]
Song, Jiaming [1 ]
Ermon, Stefano [1 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
Keywords
DOI
None available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A key advance in learning generative models is the use of amortized inference distributions that are jointly trained with the models. We find that existing training objectives for variational autoencoders can lead to inaccurate amortized inference distributions and, in some cases, improving the objective provably degrades the inference quality. In addition, it has been observed that variational autoencoders tend to ignore the latent variables when combined with a decoding distribution that is too flexible. We again identify the cause in existing training criteria and propose a new class of objectives (InfoVAE) that mitigate these problems. We show that our model can significantly improve the quality of the variational posterior and can make effective use of the latent features regardless of the flexibility of the decoding distribution. Through extensive qualitative and quantitative analyses, we demonstrate that our models outperform competing approaches on multiple performance metrics.
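The abstract notes that InfoVAE objectives can keep the aggregate posterior close to the prior without forcing the latent code to be ignored; one member of this family replaces the per-sample KL term with a Maximum Mean Discrepancy (MMD) penalty between encoder samples and prior samples. As a hedged illustration (the exact objective and kernel choices are not given in this record; the RBF kernel, bandwidth, and sample sizes below are assumptions), a minimal NumPy sketch of the MMD estimator such a regularizer relies on:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    sq_dist = (np.sum(x**2, axis=1)[:, None]
               + np.sum(y**2, axis=1)[None, :]
               - 2.0 * x @ y.T)
    return np.exp(-sq_dist / (2.0 * sigma**2))

def mmd(z_q, z_p, sigma=1.0):
    # Biased MMD^2 estimate between samples z_q (e.g. from the
    # amortized encoder) and z_p (from the prior). Including the
    # kernel-matrix diagonals makes this the squared distance between
    # empirical mean embeddings, so it is always >= 0.
    return (rbf_kernel(z_q, z_q, sigma).mean()
            + rbf_kernel(z_p, z_p, sigma).mean()
            - 2.0 * rbf_kernel(z_q, z_p, sigma).mean())

# Demo: matched distributions give a small penalty, a shifted
# "aggregate posterior" a large one. (Purely illustrative samples.)
rng = np.random.default_rng(0)
z_q = rng.normal(0.0, 1.0, size=(256, 4))  # stand-in encoder samples
z_p = rng.normal(0.0, 1.0, size=(256, 4))  # prior samples, N(0, I)
d_matched = mmd(z_q, z_p)
d_shifted = mmd(z_q + 3.0, z_p)
```

In an InfoVAE-style training loop this scalar would be added (with a weight) to the reconstruction loss; unlike the per-sample KL term, it compares distributions of latent codes, which is what allows the latent variables to stay informative.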
Pages: 5885-5892
Page count: 8
Related Papers
50 records total
  • [1] Inference Suboptimality in Variational Autoencoders
    Cremer, Chris
    Li, Xuechen
    Duvenaud, David
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [2] Recursive Inference for Variational Autoencoders
    Kim, Minyoung
    Pavlovic, Vladimir
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [3] Training Variational Autoencoders with Buffered Stochastic Variational Inference
    Shu, Rui
    Bui, Hung H.
    Whang, Jay
    Ermon, Stefano
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [4] Learning Latent Subspaces in Variational Autoencoders
    Klys, Jack
    Snell, Jake
    Zemel, Richard
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [5] Balancing Active Inference and Active Learning with Deep Variational Predictive Coding for EEG
    Ofner, Andre
    Stober, Sebastian
    2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020, : 3839 - 3844
  • [6] Gaussian Process Modeling of Approximate Inference Errors for Variational Autoencoders
    Kim, Minyoung
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 244 - 253
  • [7] Assessing Differentially Private Variational Autoencoders Under Membership Inference
    Bernau, Daniel
    Robl, Jonas
    Kerschbaum, Florian
    DATA AND APPLICATIONS SECURITY AND PRIVACY XXXVI, DBSEC 2022, 2022, 13383 : 3 - 14
  • [8] Learning Manifold Dimensions with Conditional Variational Autoencoders
    Zheng, Yijia
    He, Tong
    Qiu, Yixuan
    Wipf, David
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [9] Variational Autoencoders with Triplet Loss for Representation Learning
    Isil, Cagatay
    Solmaz, Berkan
    Koc, Aykut
    2018 26TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2018,
  • [10] Learning hard quantum distributions with variational autoencoders
    Rocchetto, Andrea
    Grant, Edward
    Strelchuk, Sergii
    Carleo, Giuseppe
    Severini, Simone
    NPJ QUANTUM INFORMATION, 2018, 4