Learnable GAN Regularization for Improving Training Stability in Limited Data Paradigm

Cited by: 1
Authors
Singh, Nakul [1 ]
Sandhan, Tushar [1 ]
Affiliations
[1] Indian Inst Technol Kanpur, Elect Engn, Percept & Intelligence Lab, Kanpur, Uttar Pradesh, India
Source
COMPUTER VISION AND IMAGE PROCESSING, CVIP 2023, PT II | 2024 / Vol. 2010
Keywords
GAN; Generator; Discriminator; Regularization; Overfitting; Limited data;
DOI
10.1007/978-3-031-58174-8_45
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Generative adversarial networks (GANs) are generative models that require large amounts of training data to ensure a stable learning trajectory during the training phase. In the absence of sufficient data, GANs suffer from unstable training dynamics that adversely affect the quality of the generated data. This behavior is attributed to the adversarial learning process and the classifier-like functioning of the discriminator: in data-deficient settings, adversarial learning leads the discriminator to memorize the data instead of generalizing. Given their wide applicability across generative tasks, improving GAN performance in the limited-data paradigm will further advance their usage in data-scarce fields. Therefore, to circumvent this issue, we propose a loss-regularized GAN, which improves performance by enforcing strong regularization on the discriminator. We conduct several experiments using limited data from the CIFAR-10 and CIFAR-100 datasets to investigate the effectiveness of the proposed model in overcoming discriminator overfitting in the absence of abundant data. We observe consistent performance improvement across all experiments compared to state-of-the-art models.
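The record does not specify the exact form of the proposed loss regularizer, so as a purely illustrative sketch, the snippet below shows one common way to "force a strong regularization on the discriminator": an R1-style gradient penalty on real samples, computed analytically for a toy linear-logistic discriminator. All names (`regularized_disc_loss`, the toy data, the weight `lam`) are hypothetical stand-ins, not the paper's method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_disc_loss(w, b, real, fake, lam=10.0):
    """Binary cross-entropy loss of a linear-logistic discriminator
    D(x) = sigmoid(w.x + b), plus an R1-style gradient penalty that
    discourages a sharp (overfitted) decision surface on real data."""
    zr = real @ w + b          # logits on real samples
    zf = fake @ w + b          # logits on fake samples
    pr, pf = sigmoid(zr), sigmoid(zf)
    eps = 1e-12
    bce = -np.mean(np.log(pr + eps)) - np.mean(np.log(1.0 - pf + eps))
    # For this toy model, grad_x D(x) = sigmoid'(z) * w, so the squared
    # gradient norm per sample is (pr*(1-pr))^2 * ||w||^2.
    grad_sq = (pr * (1.0 - pr)) ** 2 * np.sum(w * w)
    penalty = lam * np.mean(grad_sq)
    return bce + penalty, bce, penalty

rng = np.random.default_rng(0)
real = rng.normal(1.0, 0.5, size=(64, 2))   # toy "real" cluster
fake = rng.normal(-1.0, 0.5, size=(64, 2))  # toy "fake" cluster
w, b = np.array([0.5, 0.5]), 0.0
total, bce, pen = regularized_disc_loss(w, b, real, fake)
print(f"bce={bce:.4f} penalty={pen:.4f} total={total:.4f}")
```

In deep GANs the same penalty is computed with autograd on the discriminator's input gradients; the effect in either case is to smooth the discriminator and slow its memorization of a small training set.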
Pages: 542-554
Page count: 13
Related Papers
50 entries total
  • [41] IMPROVING PLDA SPEAKER VERIFICATION WITH LIMITED DEVELOPMENT DATA
    Kanagasundaram, Ahilan
    Dean, David
    Sridharan, Sridha
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,
  • [42] Enhanced Specific Emitter Identification With Limited Data Through Dual Implicit Regularization
    Peng, Yang
    Zhang, Xile
    Guo, Lantu
    Ben, Cui
    Liu, Yuchao
    Wang, Yu
    Lin, Yun
    Gui, Guan
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (15): 26395 - 26405
  • [43] Training data selection for improving discriminative training of acoustic models
    Liu, Shih-Hung
    Chu, Fang-Hui
    Lin, Shih-Hsiang
    Lee, Hung-Shin
    Chen, Berlin
    2007 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING, VOLS 1 AND 2, 2007, : 284 - 289
  • [44] Training data selection for improving discriminative training of acoustic models
    Chen, Berlin
    Liu, Shih-Hung
    Chu, Fang-Hui
    PATTERN RECOGNITION LETTERS, 2009, 30 (13) : 1228 - 1235
  • [45] Improving Pretrained Language Model Fine-Tuning With Noise Stability Regularization
    Hua, Hang
    Li, Xingjian
    Dou, Dejing
    Xu, Cheng-Zhong
    Luo, Jiebo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 1898 - 1910
  • [46] A strategy for improving GAN generation: Contrastive self-adversarial training
    Zhang, Hansen
    Yang, Miao
    Wang, Haiwen
    Qiu, Yuquan
    Neurocomputing, 2025, 637
  • [47] A New Contrastive GAN With Data Augmentation for Surface Defect Recognition Under Limited Data
    Du, Zongwei
    Gao, Liang
    Li, Xinyu
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [48] Improving thermal stability of sulfide solid electrolytes: An intrinsic theoretical paradigm
    Wang, Shuo
    Wu, Yujing
    Li, Hong
    Chen, Liquan
    Wu, Fan
    INFOMAT, 2022, 4 (08)
  • [50] Detecting Tear Gas Canisters With Limited Training Data
    D'Cruz, Ashwin
    Tegho, Christopher
    Greaves, Sean
    Kermode, Lachlan
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 1283 - 1291