Learnable GAN Regularization for Improving Training Stability in Limited Data Paradigm

Cited by: 1
Authors
Singh, Nakul [1 ]
Sandhan, Tushar [1 ]
Affiliation
[1] Indian Institute of Technology Kanpur, Department of Electrical Engineering, Perception & Intelligence Lab, Kanpur, Uttar Pradesh, India
Keywords
GAN; Generator; Discriminator; Regularization; Overfitting; Limited data;
DOI
10.1007/978-3-031-58174-8_45
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Generative adversarial networks (GANs) are generative models that require large amounts of training data to maintain a stable learning trajectory during training. When sufficient data are unavailable, GANs suffer from unstable training dynamics that degrade the quality of the generated data. This behavior is attributed to the adversarial learning process and the classifier-like functioning of the discriminator: in data-deficient settings, adversarial learning drives the discriminator to memorize the training data instead of generalizing from it. Because GANs are widely applicable across generative tasks, improving their performance in the limited-data paradigm will further advance their usage in data-scarce fields. To circumvent this issue, we propose a loss-regularized GAN, which improves performance by imposing strong regularization on the discriminator. We conduct several experiments using limited data from the CIFAR-10 and CIFAR-100 datasets to investigate the effectiveness of the proposed model in overcoming discriminator overfitting in the absence of abundant data, and we observe consistent performance improvements over state-of-the-art models across all experiments.
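The record does not describe the regularizer itself, so the following is only a rough illustration of the abstract's idea: a strongly regularized discriminator with a learnable loss term. It is a minimal PyTorch sketch; the class name RegularizedDiscriminatorLoss, the R1-style gradient penalty, and the softplus-constrained learnable weight are all assumptions for illustration, not the paper's actual method.

import torch
import torch.nn.functional as F

class RegularizedDiscriminatorLoss(torch.nn.Module):
    """Non-saturating discriminator loss plus a penalty with a learnable weight.

    Illustrative sketch only: the penalty choice (R1 gradient penalty) and the
    learnable weighting are assumptions, not taken from the paper.
    """

    def __init__(self):
        super().__init__()
        # Unconstrained scalar parameter; softplus keeps the effective weight positive.
        self.raw_weight = torch.nn.Parameter(torch.zeros(()))

    def forward(self, disc, real, fake):
        real = real.requires_grad_(True)
        d_real = disc(real)
        d_fake = disc(fake.detach())
        # Standard non-saturating GAN discriminator loss.
        adv_loss = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
        # R1-style gradient penalty on real samples (an assumed regularizer),
        # discouraging the discriminator from memorizing the training data.
        grad = torch.autograd.grad(d_real.sum(), real, create_graph=True)[0]
        penalty = grad.pow(2).flatten(start_dim=1).sum(dim=1).mean()
        return adv_loss + F.softplus(self.raw_weight) * penalty

# Hypothetical usage inside a training step:
# loss_fn = RegularizedDiscriminatorLoss()
# d_loss = loss_fn(discriminator, real_batch, generator(noise))
# d_loss.backward()

Note that a penalty weight learned by plain loss minimization would be driven toward zero, so a working method needs some opposing pressure on the weight (a constraint, schedule, or adversarial update); how the paper handles this is not recoverable from this record.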
Pages: 542 - 554
Page count: 13
Related papers
50 records in total
  • [1] Learnable GAN Regularization for Improving Training Stability in Limited Data Paradigm
    Singh, Nakul
    Sandhan, Tushar
    COMMUNICATIONS IN COMPUTER AND INFORMATION SCIENCE, VOL 2010, SPRINGER : 542 - 554
  • [2] DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data
    Fang, Tiantian
    Sun, Ruoyu
    Schwing, Alex
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Novel Regularization for Learning the Fuzzy Choquet Integral With Limited Training Data
    Kakula, Siva Krishna
    Pinar, Anthony J.
    Islam, Muhammad Aminul
    Anderson, Derek T.
    Havens, Timothy C.
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2021, 29 (10) : 2890 - 2901
  • [4] Learnable Graph Matching: A Practical Paradigm for Data Association
    He, Jiawei
    Huang, Zehao
    Wang, Naiyan
    Zhang, Zhaoxiang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (07) : 4880 - 4895
  • [5] Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data
    Jiang, Liming
    Dai, Bo
    Wu, Wayne
    Loy, Chen Change
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [6] Inhomogeneous regularization with limited and indirect data
    Han, Jihun
    Lee, Yoonsang
    JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 2023, 428
  • [7] Towards a Better Understanding and Regularization of GAN Training Dynamics
    Nie, Weili
    Patel, Ankit B.
    35TH UNCERTAINTY IN ARTIFICIAL INTELLIGENCE CONFERENCE (UAI 2019), 2020, 115 : 281 - 291
  • [8] DM-GAN: CNN hybrid ViTs for training GANs under limited data
    Yan, Longquan
    Yan, Ruixiang
    Chai, Bosong
    Geng, Guohua
    Zhou, Pengbo
    Gao, Jian
    PATTERN RECOGNITION, 2024, 156
  • [9] Improving FM-GAN Through Mixup Manifold Regularization
    Ghorban, Farzin
    Hasan, Nesreen
    Velten, Joerg
    Kummert, Anton
    2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021,
  • [10] Improving Federated Learning on Heterogeneous Data via Serial Pipeline Training and Global Knowledge Regularization
    Luo, Yiyang
    Lu, Ting
    Chang, Shan
    Wang, Bingyue
    2022 IEEE 28TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS, ICPADS, 2022, : 851 - 858