Training Variational Autoencoders with Discrete Latent Variables Using Importance Sampling

Cited: 0
Authors
Bartler, Alexander [1 ]
Wiewel, Felix [1 ]
Mauch, Lukas [1 ]
Yang, Bin [1 ]
Affiliations
[1] Univ Stuttgart, Inst Signal Proc & Syst Theory, Stuttgart, Germany
Keywords
variational autoencoder; discrete latent variables; importance sampling;
DOI
10.23919/eusipco.2019.8902811
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808 ; 0809 ;
Abstract
The Variational Autoencoder (VAE) is a popular generative latent variable model that is often used for representation learning. Standard VAEs assume continuous-valued latent variables and are trained by maximization of the evidence lower bound (ELBO). Conventional methods obtain a differentiable estimate of the ELBO with reparametrized sampling and optimize it with Stochastic Gradient Descent (SGD). However, this is not possible if we want to train VAEs with discrete-valued latent variables, since reparametrized sampling is not possible. In this paper, we propose a simple method to train VAEs with binary or categorically valued latent representations. To this end, we use a differentiable estimator for the ELBO which is based on importance sampling. In experiments, we verify the approach and train two different VAE architectures with Bernoulli and categorically distributed latent representations on two different benchmark datasets.
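The core idea in the abstract can be illustrated with a minimal sketch (my own toy construction, not the authors' code): for a discrete latent z, the expectation under q cannot be reparametrized, but sampling z from a fixed proposal r(z) and weighting each sample by q(z)/r(z) yields an ELBO estimate whose dependence on q's parameters sits in the (differentiable) weight, not in the sampling step. Here with a single Bernoulli latent, a uniform proposal, and a hypothetical toy log-joint:

```python
import math
import random

def elbo_importance_estimate(theta, log_joint, n_samples=20000, seed=0):
    """Importance-sampling ELBO estimate for one Bernoulli latent z.

    ELBO(theta) = E_{z~q}[log p(x,z) - log q(z)], with q(z=1) = theta.
    Instead of sampling z ~ q (which is not reparametrizable), sample z
    from a fixed uniform proposal r(z) = 0.5 and weight by q(z)/r(z).
    The weight depends smoothly on theta, so the estimate is
    differentiable in theta even though z itself is discrete.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        z = 1 if rng.random() < 0.5 else 0       # z ~ r, fixed proposal
        q_z = theta if z == 1 else 1.0 - theta   # q(z), carries the theta-dependence
        w = q_z / 0.5                            # importance weight q(z)/r(z)
        total += w * (log_joint(z) - math.log(q_z))
    return total / n_samples

# Hypothetical toy joint: the observed x is fixed and absorbed into the numbers.
log_joint = lambda z: math.log(0.7 if z == 1 else 0.3)

est = elbo_importance_estimate(0.6, log_joint)
# Exact ELBO by enumerating both values of z, for comparison.
exact = sum(q * (log_joint(z) - math.log(q)) for z, q in [(1, 0.6), (0, 0.4)])
```

With enough samples the estimate matches the enumerated ELBO; in a real VAE, theta would be the encoder output and an autodiff framework would backpropagate through the weight q(z)/r(z).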
Pages: 5
Related Papers
50 total
  • [21] Bhattacharyya Error and Divergence using Variational Importance Sampling
    Olsen, Peder A.
    Hershey, John R.
    INTERSPEECH 2007: 8TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION, VOLS 1-4, 2007, : 937 - 940
  • [22] Speculative Sampling in Variational Autoencoders for Dialogue Response Generation
    Sato, Shoetsu
    Yoshinaga, Naoki
    Toyoda, Masashi
    Kitsuregawa, Masaru
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 4739 - 4745
  • [23] Disentangling shared and private latent factors in multimodal Variational Autoencoders
    Martens, Kaspar
    Yau, Christopher
    MACHINE LEARNING IN COMPUTATIONAL BIOLOGY, VOL 240, 2023, 240
  • [24] Variational approximation for importance sampling
    Su, Xiao
    Chen, Yuguo
    COMPUTATIONAL STATISTICS, 2021, 36 (03) : 1901 - 1930
  • [25] Importance sampling as a variational approximation
    Nott, David J.
    Li Jialiang
    Fielding, Mark
    STATISTICS & PROBABILITY LETTERS, 2011, 81 (08) : 1052 - 1055
  • [26] Facial Attribute Editing by Latent Space Adversarial Variational Autoencoders
    Li, Defang
    Zhang, Min
    Chen, Weifu
    Feng, Guocan
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 1337 - 1342
  • [27] IMPROVING SYNTHESIZER PROGRAMMING FROM VARIATIONAL AUTOENCODERS LATENT SPACE
    Le Vaillant, Gwendal
    Dutoit, Thierry
    Dekeyser, Sebastien
    2021 24TH INTERNATIONAL CONFERENCE ON DIGITAL AUDIO EFFECTS (DAFX), 2021, : 276 - 283
  • [29] Systematic control of collective variables learned from variational autoencoders
    Monroe, Jacob I.
    Shen, Vincent K.
    JOURNAL OF CHEMICAL PHYSICS, 2022, 157 (09):
  • [30] Fast Decoding in Sequence Models Using Discrete Latent Variables
    Kaiser, Lukasz
    Roy, Aurko
    Vaswani, Ashish
    Parmar, Niki
    Bengio, Samy
    Uszkoreit, Jakob
    Shazeer, Noam
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80