Different Latent Variables Learning in Variational Autoencoder

Cited by: 0
Authors
Xu, Qingyang [1 ]
Yang, Yiqin [1 ]
Wu, Zhe [1 ]
Zhang, Li [1 ]
Affiliations
[1] Shandong Univ, Sch Mech Elect & Informat Engn, Weihai 264209, Peoples R China
Keywords
variational autoencoder; probabilistic model; latent variable; MNIST
DOI
Not available
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Unsupervised learning is a useful way to train neural networks, yet unsupervised learning algorithms remain relatively scarce. Generative models are an interesting family of such algorithms: by building a probabilistic model of the input data, they can generate new data similar to the training samples, and they can therefore be used for unsupervised learning. The variational autoencoder is a typical generative model. It differs from a common autoencoder in that a probabilistic parameter layer follows the hidden layer, and new data can be reconstructed from the parameters of this probabilistic model. These probabilistic model parameters are the latent variables. In this paper, we study how the reconstruction behavior of the variational autoencoder changes with different numbers of latent variables. According to the simulations, the more latent variables are used, the more styles of the samples the model can reproduce.
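The abstract describes the architecture only in prose, so the following sketch illustrates the idea in code. It is a hypothetical minimal implementation, not the authors' code: PyTorch is assumed, and the MNIST-style 784-dimensional input, the hidden size, and the default latent_dim are illustrative choices. The probabilistic parameter layer (mean and log-variance) follows the hidden layer, a latent sample is drawn with the reparameterization trick, and the decoder reconstructs data from that sample; varying latent_dim corresponds to experimenting with different numbers of latent variables.

# Minimal VAE sketch (assumed PyTorch implementation; layer sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: a hidden layer followed by the "probabilistic parameter layer",
        # which outputs the mean and log-variance of the latent Gaussian.
        self.fc_hidden = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: reconstructs the input from a latent sample.
        self.fc_dec1 = nn.Linear(latent_dim, hidden_dim)
        self.fc_dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc_hidden(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.fc_dec1(z))
        return torch.sigmoid(self.fc_dec2(h))

    def forward(self, x):
        mu, logvar = self.encode(x.view(x.size(0), -1))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    bce = F.binary_cross_entropy(recon_x, x.view(x.size(0), -1), reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

Under these assumptions, the comparison described in the abstract would amount to training the same model with, for example, latent_dim set to 2, 10, or 20 and inspecting how well the reconstructions capture the styles of the samples.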
Pages: 508-511
Number of pages: 4
Related Articles
50 records in total
  • [31] Variational Autoencoder for Deep Learning of Images, Labels and Captions
    Pu, Yunchen; Gan, Zhe; Henao, Ricardo; Yuan, Xin; Li, Chunyuan; Stevens, Andrew; Carin, Lawrence
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [32] Representation and Reconstruction of Image-Based Structural Patterns of Glaucomatous Defects Using only Two Latent Variables from a Variational Autoencoder
    Wang, Jui-Kai; Kardon, Randy H.; Garvin, Mona K.
    OPHTHALMIC MEDICAL IMAGE ANALYSIS, OMIA 2021, 2021, 12970: 159-167
  • [33] Representation learning of resting state fMRI with variational autoencoder
    Kim, Jung-Hoon; Zhang, Yizhen; Han, Kuan; Wen, Zheyu; Choi, Minkyu; Liu, Zhongming
    NEUROIMAGE, 2021, 241
  • [34] Infinite Variational Autoencoder for Semi-Supervised Learning
    Abbasnejad, M. Ehsan; Dick, Anthony; van den Hengel, Anton
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017: 781-790
  • [35] FREPD: A Robust Federated Learning Framework on Variational Autoencoder
    Gu, Zhipin; He, Liangzhong; Li, Peiyan; Sun, Peng; Shi, Jiangyong; Yang, Yuexiang
    COMPUTER SYSTEMS SCIENCE AND ENGINEERING, 2021, 39 (03): 307-320
  • [36] A Contrastive Learning Approach for Training Variational Autoencoder Priors
    Aneja, Jyoti; Schwing, Alexander G.; Kautz, Jan; Vahdat, Arash
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [37] Enhancing IoT Healthcare with Federated Learning and Variational Autoencoder
    Bhatti, Dost Muhammad Saqib; Choi, Bong Jun
    SENSORS, 2024, 24 (11)
  • [38] Unsupervised Disentanglement Learning via Dirichlet Variational Autoencoder
    Xu, Kunxiong; Fan, Wentao; Liu, Xin
    ADVANCES AND TRENDS IN ARTIFICIAL INTELLIGENCE. THEORY AND APPLICATIONS, IEA/AIE 2023, PT I, 2023, 13925: 341-352
  • [39] Variational Autoencoder-Based Vehicle Trajectory Prediction with an Interpretable Latent Space
    Neumeier, Marion; Tollkuhn, Andreas; Berberich, Thomas; Botsch, Michael
    2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021: 820-827
  • [40] Deep clustering analysis via variational autoencoder with Gamma mixture latent embeddings
    Guo, Jiaxun; Fan, Wentao; Amayri, Manar; Bouguila, Nizar
    NEURAL NETWORKS, 2025, 183