Re-GAN: Data-Efficient GANs Training via Architectural Reconfiguration

Cited by: 14
Authors: Saxena, Divya [1]; Cao, Jiannong [1]; Xu, Jiahao [1]; Kulshrestha, Tarun [1]
Affiliation: [1] Hong Kong Polytechnic University, Hong Kong, People's Republic of China
DOI: 10.1109/CVPR52729.2023.01557
Chinese Library Classification (CLC): TP18 [Artificial intelligence theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract: Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires a vast number of training images. Recent research on GAN tickets reveals that dense GAN models contain sparse sub-networks, or lottery tickets, that yield better results under limited data when trained separately. However, finding GAN tickets requires an expensive train-prune-retrain process. In this paper, we propose Re-GAN, a data-efficient GAN training method that dynamically reconfigures the GAN architecture during training to explore different sub-network structures at training time. Our method repeatedly prunes unimportant connections to regularize the GAN network and regrows them to reduce the risk of prematurely pruning important connections. Re-GAN stabilizes GAN models with less data and offers an alternative to existing GAN tickets and progressive growing methods. We demonstrate that Re-GAN is a generic training methodology that achieves stability on datasets of varying sizes, domains, and resolutions (CIFAR-10, Tiny-ImageNet, and multiple few-shot generation datasets) as well as on different GAN architectures (SNGAN, ProGAN, StyleGAN2, and AutoGAN). Re-GAN also improves performance when combined with recent augmentation approaches. Moreover, Re-GAN requires fewer floating-point operations (FLOPs) and less training time by removing unimportant connections during GAN training, while generating samples of comparable or even higher quality. Compared to the state-of-the-art StyleGAN2, our method performs better without requiring any additional fine-tuning step. Code can be found at this link: https://github.com/IntellicentAI-Lab/Re-GAN
Pages: 16230-16240 (11 pages)
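
The prune-and-regrow cycle described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes magnitude-based pruning at a fixed sparsity ratio, a toy fully connected generator, and an arbitrary prune/regrow schedule; helper names such as `build_masks` and `apply_masks` and the schedule constants are hypothetical, not taken from the Re-GAN codebase.

```python
# Minimal sketch (not the authors' code) of a prune-and-regrow training cycle,
# assuming magnitude-based pruning, a fixed sparsity ratio, and a toy generator.
import torch
import torch.nn as nn

def build_masks(model: nn.Module, sparsity: float = 0.3) -> dict:
    """Mark the smallest-magnitude fraction of each weight tensor as pruned."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:                      # skip biases / norm parameters
            continue
        k = int(sparsity * p.numel())
        if k == 0:
            continue
        threshold = p.detach().abs().flatten().kthvalue(k).values
        masks[name] = (p.detach().abs() > threshold).float()
    return masks

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out pruned connections so only the sparse sub-network is active."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

# Toy generator and a skeleton of the training loop.
G = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)

PRUNE_EVERY, REGROW_AFTER = 1000, 500        # illustrative schedule, not the paper's
masks = None
for step in range(10_000):
    # ... the usual adversarial updates of G (and a discriminator D) go here ...
    if step % PRUNE_EVERY == 0:
        masks = build_masks(G)               # prune: switch to a sparse sub-network
    elif step % PRUNE_EVERY == REGROW_AFTER:
        masks = None                         # regrow: all connections trainable again
    if masks is not None:
        apply_masks(G, masks)                # keep pruned weights at zero in this phase
```

Re-applying the mask each step keeps the pruned connections at zero during the sparse phase, while dropping the mask later lets those connections regrow, which mirrors the abstract's motivation of avoiding prematurely pruning important connections.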