Improving generative adversarial networks with simple latent distributions

Cited by: 2
Authors
Zhang, Shufei [1 ]
Huang, Kaizhu [1 ]
Qian, Zhuang [1 ]
Zhang, Rui [2 ]
Hussain, Amir [3 ]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Sch Adv Technol, Suzhou 215123, Peoples R China
[2] Xian Jiaotong Liverpool Univ, Sch Sci, SIP, Suzhou 215123, Peoples R China
[3] Edinburgh Napier Univ, Sch Comp, Merchiston Campus, Edinburgh EH10 5DT, Midlothian, Scotland
Source
NEURAL COMPUTING & APPLICATIONS | 2021 / Vol. 33 / Issue 20
Funding
UK Engineering and Physical Sciences Research Council; National Natural Science Foundation of China;
Keywords
Generative adversarial network; Deep generative model; Information theory; Deep learning; Generation;
DOI
10.1007/s00521-021-05946-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Generative Adversarial Networks (GANs) have drawn great attention recently since they are powerful models for generating high-quality images. Although GANs have achieved great success, they usually suffer from unstable training, which in some cases leads to poor generation quality. This drawback is mainly attributed to the difficulty of measuring the divergence between the highly complicated real and fake data distributions, which normally lie in a high-dimensional space. To tackle this problem, previous researchers have attempted to find a proper divergence capable of measuring the departure between such complex distributions. In contrast, we alleviate the problem from a different perspective: while retaining as much information as possible about the original high-dimensional distributions, we learn and leverage an additional latent space in which simple distributions are defined over a low-dimensional space; as a result, the distance between two simple distributions can readily be computed with an existing divergence measure. Concretely, to retain the data information, the mutual information between the variables of the high-dimensional complex distributions and those of the low-dimensional simple distributions is maximized. The departure between the resulting simple distributions is then measured in the original GAN fashion. Additionally, to further simplify the optimization, we directly optimize a lower bound on the mutual information. Termed SimpleGAN, the proposed approach is evaluated over several baseline models, i.e., conventional GANs, DCGAN, WGAN-GP, WGAN-GP-res, and LSWGAN-GP, on the benchmark CIFAR-10 and STL-10 datasets, and shows clear improvements over these baselines. Furthermore, in comparison with existing methods that measure the distribution departure directly in the high-dimensional space, our method clearly demonstrates its superiority. Finally, a series of experiments illustrates the advantages of the proposed SimpleGAN.
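The abstract mentions optimizing a lower bound on mutual information rather than the mutual information itself, but this record does not spell out which bound. A common choice in this line of work is the Barber–Agakov variational bound, I(X; Z) >= H(Z) + E[log q(z | x)], where q is a learned auxiliary decoder. The NumPy sketch below (illustrative only, not the authors' implementation; `mi_lower_bound` and the toy setup are hypothetical) shows how such a bound is estimated from samples for a discrete latent code:

```python
import numpy as np

rng = np.random.default_rng(0)

def mi_lower_bound(z, q_log_prob):
    """Monte-Carlo estimate of H(Z) + E[log q(z | x)] for a discrete code Z.

    z          : (n,) array of discrete latent codes
    q_log_prob : (n,) array of log q(z_i | x_i) from a learned decoder
    """
    # Empirical entropy H(Z) of the latent code distribution.
    _, counts = np.unique(z, return_counts=True)
    p = counts / counts.sum()
    h_z = -np.sum(p * np.log(p))
    # Barber-Agakov bound: entropy term plus expected decoder log-likelihood.
    return h_z + q_log_prob.mean()

# Toy setup: 4 equally likely latent codes; a decoder that recovers the
# code with probability 0.9 contributes log(0.9) per sample.
n = 10_000
z = rng.integers(0, 4, size=n)
bound = mi_lower_bound(z, np.full(n, np.log(0.9)))
# The bound can never exceed H(Z) <= log(4), the ceiling on I(X; Z) here.
```

In a GAN setting, maximizing this quantity with respect to both the encoder producing Z and the decoder q encourages the low-dimensional code to retain information about the high-dimensional data, which is the role mutual-information maximization plays in the approach described above.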
Pages: 13193-13203
Number of pages: 11
Related Papers
50 records in total
  • [1] Improving generative adversarial networks with simple latent distributions
    Shufei Zhang
    Kaizhu Huang
    Zhuang Qian
    Rui Zhang
    Amir Hussain
    Neural Computing and Applications, 2021, 33 : 13193 - 13203
  • [2] Improving Generative Adversarial Networks via Adversarial Learning in Latent Space
    Li, Yang
    Mo, Yichuan
    Shi, Liangliang
    Yan, Junchi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Optimizing Latent Distributions for Non-Adversarial Generative Networks
    Guo, Tianyu
    Xu, Chang
    Shi, Boxin
    Xu, Chao
    Tao, Dacheng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (05) : 2657 - 2672
  • [4] Improving generative adversarial networks for speech enhancement through regularization of latent representations
    Yang, Fan
    Wang, Ziteng
    Li, Junfeng
    Xia, Risheng
    Yan, Yonghong
    SPEECH COMMUNICATION, 2020, 118 (118) : 1 - 9
  • [5] Latent Space Conditioning on Generative Adversarial Networks
    Durall, Ricard
    Ho, Kalun
    Pfreundt, Franz-Josef
    Keuper, Janis
    VISAPP: PROCEEDINGS OF THE 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL. 4: VISAPP, 2021, : 24 - 34
  • [6] Improving Evolutionary Generative Adversarial Networks
    Liu, Zheping
    Sabar, Nasser
    Song, Andy
    AI 2021: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13151 : 691 - 702
  • [7] Latent Fingerprint Enhancement using Generative Adversarial Networks
    Joshi, Indu
    Anand, Adithya
    Vatsa, Mayank
    Singh, Richa
    Roy, Sumantra Dutta
    Kalra, Prem Kumar
    2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2019, : 895 - 903
  • [8] Latent Dirichlet allocation based generative adversarial networks
    Pan, Lili
    Cheng, Shen
    Liu, Jian
    Tang, Peijun
    Wang, Bowen
    Ren, Yazhou
    Xu, Zenglin
    NEURAL NETWORKS, 2020, 132 : 461 - 476
  • [9] Evolutionary Latent Space Exploration of Generative Adversarial Networks
    Fernandes, Paulo
    Correia, Joao
    Machado, Penousal
    APPLICATIONS OF EVOLUTIONARY COMPUTATION, EVOAPPLICATIONS 2020, 2020, 12104 : 595 - 609
  • [10] ClusterGAN: Latent Space Clustering in Generative Adversarial Networks
    Mukherjee, Sudipto
    Asnani, Himanshu
    Lin, Eugene
    Kannan, Sreeram
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 4610 - 4617