Training Variational Autoencoders with Discrete Latent Variables Using Importance Sampling

Times Cited: 0
Authors
Bartler, Alexander [1 ]
Wiewel, Felix [1 ]
Mauch, Lukas [1 ]
Yang, Bin [1 ]
Affiliations
[1] Univ Stuttgart, Inst Signal Proc & Syst Theory, Stuttgart, Germany
Keywords
variational autoencoder; discrete latent variables; importance sampling;
DOI
10.23919/eusipco.2019.8902811
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology & Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
The Variational Autoencoder (VAE) is a popular generative latent variable model that is often used for representation learning. Standard VAEs assume continuous-valued latent variables and are trained by maximizing the evidence lower bound (ELBO). Conventional methods obtain a differentiable estimate of the ELBO via reparametrized sampling and optimize it with Stochastic Gradient Descent (SGD). However, this approach does not carry over to VAEs with discrete-valued latent variables, because discrete distributions cannot be reparametrized. In this paper, we propose a simple method to train VAEs with binary or categorically valued latent representations. To this end, we use a differentiable estimator of the ELBO that is based on importance sampling. In experiments, we verify the approach and train two different VAE architectures with Bernoulli and categorically distributed latent representations on two benchmark datasets.
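The following is a minimal sketch of the importance-sampling idea behind a differentiable ELBO estimate; it states only the general identity, and the proposal distribution \pi(z) and sample count S are illustrative symbols, not necessarily the choices made in the paper. The ELBO is

\[
\mathcal{L}(\theta,\phi;x) = \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right] - D_{\mathrm{KL}}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right).
\]

For discrete z, sampling z \sim q_\phi(z\mid x) cannot be reparametrized, but with a parameter-free proposal \pi(z) (e.g., uniform over the latent categories) the reconstruction term can be rewritten and estimated as

\[
\mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]
= \mathbb{E}_{z\sim\pi}\!\left[\frac{q_\phi(z\mid x)}{\pi(z)}\,\log p_\theta(x\mid z)\right]
\approx \frac{1}{S}\sum_{s=1}^{S}\frac{q_\phi(z_s\mid x)}{\pi(z_s)}\,\log p_\theta(x\mid z_s),
\qquad z_s \sim \pi(z),
\]

which is differentiable with respect to \phi and \theta because the samples z_s are drawn from \pi and therefore do not depend on the encoder parameters.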
Pages: 5