Training Variational Autoencoders with Discrete Latent Variables Using Importance Sampling

Times Cited: 0
Authors
Bartler, Alexander [1 ]
Wiewel, Felix [1 ]
Mauch, Lukas [1 ]
Yang, Bin [1 ]
Affiliations
[1] Univ Stuttgart, Inst Signal Proc & Syst Theory, Stuttgart, Germany
Keywords
variational autoencoder; discrete latent variables; importance sampling;
DOI
10.23919/eusipco.2019.8902811
Chinese Library Classification (CLC)
TM (electrical engineering); TN (electronic and communication technology);
Discipline codes
0808; 0809
Abstract
The Variational Autoencoder (VAE) is a popular generative latent variable model that is often used for representation learning. Standard VAEs assume continuous-valued latent variables and are trained by maximizing the evidence lower bound (ELBO). Conventional methods obtain a differentiable estimate of the ELBO via reparametrized sampling and optimize it with Stochastic Gradient Descent (SGD). This approach fails for VAEs with discrete-valued latent variables, however, because discrete sampling admits no reparametrization. In this paper, we propose a simple method to train VAEs with binary or categorically valued latent representations. To this end, we use a differentiable estimator of the ELBO based on importance sampling. In experiments, we verify the approach and train two different VAE architectures with Bernoulli and categorically distributed latent representations on two different benchmark datasets.
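To make the idea concrete, below is a minimal sketch (not the authors' reference code) of a differentiable, importance-sampled ELBO estimate for a VAE with Bernoulli latents, written in PyTorch. The names encoder, decoder, and num_samples, and the choice of a detached copy of q(z|x) as the proposal distribution, are illustrative assumptions; the paper's exact proposal and variance-reduction choices may differ.

```python
# Minimal sketch (illustrative, not the paper's reference implementation) of a
# differentiable, importance-sampled ELBO estimate for a Bernoulli-latent VAE.
import torch
from torch.distributions import Bernoulli

def importance_sampled_elbo(x, encoder, decoder, num_samples=16):
    """Estimate the ELBO for binary latents without reparametrized sampling.

    x:       binary inputs, shape (batch, features)
    encoder: maps x to the logits of q_phi(z|x)
    decoder: maps z to the Bernoulli logits of p_theta(x|z)
    """
    q_logits = encoder(x)
    q = Bernoulli(logits=q_logits)                  # q_phi(z|x), carries encoder gradients
    proposal = Bernoulli(logits=q_logits.detach())  # pi(z): gradient-free proposal

    estimates = []
    for _ in range(num_samples):
        z = proposal.sample()                       # discrete sample; no reparametrization
        log_q = q.log_prob(z).sum(dim=-1)           # log q_phi(z|x), differentiable in phi
        log_pi = proposal.log_prob(z).sum(dim=-1)   # log pi(z), constant w.r.t. phi
        w = torch.exp(log_q - log_pi)               # importance weight q/pi
        log_prior = Bernoulli(probs=torch.full_like(z, 0.5)).log_prob(z).sum(dim=-1)
        log_lik = Bernoulli(logits=decoder(z)).log_prob(x).sum(dim=-1)
        estimates.append(w * (log_lik + log_prior - log_q))
    return torch.stack(estimates).mean(dim=0)       # Monte Carlo average per example
```

Training would then minimize -importance_sampled_elbo(x, encoder, decoder).mean() with SGD: because the importance weight is an explicit, differentiable function of the encoder output, gradients reach the encoder even though the latent samples themselves are discrete.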
Pages: 5
Related papers
50 records in total (items [31]-[40] shown)
  • [31] Generating In-Between Images Through Learned Latent Space Representation Using Variational Autoencoders
    Cristovao, Paulino
    Nakada, Hidemoto
    Tanimura, Yusuke
    Asoh, Hideki
    IEEE ACCESS, 2020, 8 : 149456 - 149467
  • [32] Drug-Target Binding Affinity Prediction in a Continuous Latent Space Using Variational Autoencoders
    Zhao, Lingling
    Zhu, Yan
    Wen, Naifeng
    Wang, Chunyu
    Wang, Junjie
    Yuan, Yongfeng
    IEEE-ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, 2024, 21 (05) : 1458 - 1467
  • [33] Discrete choice models with latent variables using subjective data
    Morikawa, T
    Sasaki, K
    TRAVEL BEHAVIOUR RESEARCH: UPDATING THE STATE OF PLAY, 1998, : 435 - 455
  • [34] μ-Forcing: Training Variational Recurrent Autoencoders for Text Generation
    Liu, Dayiheng
    Xue, Yang
    He, Feng
    Chen, Yuanyuan
    Lv, Jiancheng
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2020, 19 (01)
  • [35] SPEECH DEREVERBERATION USING VARIATIONAL AUTOENCODERS
    Baby, Deepak
    Bourlard, Herve
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 5784 - 5788
  • [36] Energy disaggregation using variational autoencoders
    Langevin, Antoine
    Carbonneau, Marc-Andre
    Cheriet, Mohamed
    Gagnon, Ghyslain
    ENERGY AND BUILDINGS, 2022, 254
  • [37] Disentangling Generative Factors in Natural Language with Discrete Variational Autoencoders
    Mercatali, Giangiacomo
    Freitas, Andre
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 3547 - 3556
  • [38] DVAE#: Discrete Variational Autoencoders with Relaxed Boltzmann Priors
    Vahdat, Arash
    Andriyash, Evgeny
    Macready, William G.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [39] DVAE++: Discrete Variational Autoencoders with Overlapping Transformations
    Vahdat, Arash
    Macready, William G.
    Bian, Zhengbing
    Khoshaman, Amir
    Andriyash, Evgeny
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [40] EdVAE: Mitigating codebook collapse with evidential discrete variational autoencoders
    Baykal, Gulcin
    Kandemir, Melih
    Unal, Gozde
    PATTERN RECOGNITION, 2024, 156