Training Variational Autoencoders with Discrete Latent Variables Using Importance Sampling

Cited by: 0
Authors
Bartler, Alexander [1 ]
Wiewel, Felix [1 ]
Mauch, Lukas [1 ]
Yang, Bin [1 ]
Affiliations
[1] Univ Stuttgart, Inst Signal Proc & Syst Theory, Stuttgart, Germany
Keywords
variational autoencoder; discrete latent variables; importance sampling;
DOI
10.23919/eusipco.2019.8902811
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification
0808; 0809;
Abstract
The Variational Autoencoder (VAE) is a popular generative latent variable model that is often used for representation learning. Standard VAEs assume continuous-valued latent variables and are trained by maximizing the evidence lower bound (ELBO). Conventional methods obtain a differentiable estimate of the ELBO via reparametrized sampling and optimize it with Stochastic Gradient Descent (SGD). However, this approach fails for VAEs with discrete-valued latent variables, because the reparametrization trick cannot be applied to discrete distributions. In this paper, we propose a simple method to train VAEs with binary or categorically valued latent representations. To this end, we use a differentiable estimator of the ELBO based on importance sampling. In experiments, we verify the approach and train two different VAE architectures with Bernoulli and categorically distributed latent representations on two different benchmark datasets.
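The key idea in the abstract can be illustrated on a toy model: instead of sampling the discrete latent z from the encoder distribution q(z|x), which is not reparametrizable, one samples z from a fixed proposal distribution π(z) and reweights the ELBO integrand by q(z|x)/π(z). Since q then appears only inside the integrand, the resulting estimate is differentiable in q's parameters. The sketch below is illustrative only, not the paper's architecture: a single Bernoulli latent, a uniform proposal π(z) = 0.5, a unit-variance Gaussian decoder, and all parameter values are assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy-model parameters (not from the paper):
q_prob = 0.7             # q(z=1|x): encoder's Bernoulli parameter
p_prob = 0.5             # p(z=1):   prior Bernoulli parameter
mu = {0: -1.0, 1: 1.0}   # decoder mean: p(x|z) = N(mu[z], 1)
x = 0.8                  # one observed data point

def log_normal(x, mu):
    """Log density of N(mu, 1) at x."""
    return -0.5 * np.log(2 * np.pi) - 0.5 * (x - mu) ** 2

def log_bern(z, p):
    """Log probability mass of Bernoulli(p) at z in {0, 1}."""
    return z * np.log(p) + (1 - z) * np.log(1 - p)

# Sample z from the fixed uniform proposal pi(z) = 0.5, NOT from q, so the
# estimate stays differentiable in q_prob (q appears only in the integrand).
N = 200_000
z = rng.integers(0, 2, size=N)
log_pi = np.log(0.5)
w = np.exp(log_bern(z, q_prob) - log_pi)   # importance weights q(z)/pi(z)
integrand = (log_normal(x, np.where(z == 1, mu[1], mu[0]))
             + log_bern(z, p_prob) - log_bern(z, q_prob))
elbo_is = np.mean(w * integrand)

# With only two latent states, the exact ELBO follows by enumeration,
# which lets us check the importance-sampling estimate:
elbo_exact = sum(
    np.exp(log_bern(zi, q_prob))
    * (log_normal(x, mu[zi]) + log_bern(zi, p_prob) - log_bern(zi, q_prob))
    for zi in (0, 1)
)
print(elbo_is, elbo_exact)
```

With the values assumed above, the Monte Carlo estimate agrees closely with the enumerated ELBO; in a real VAE, the same reweighted estimate would be backpropagated through the encoder network that produces `q_prob`.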
Pages: 5