A new perspective for understanding generalization gap of deep neural networks trained with large batch sizes

Times Cited: 3
Authors
Oyedotun, Oyebade K. [1 ]
Papadopoulos, Konstantinos [1 ]
Aouada, Djamila [1 ]
Affiliations
[1] Univ Luxembourg, Interdisciplinary Ctr Secur Reliabil & Trust SnT, L-1855 Luxembourg, Luxembourg
Keywords
Neural network; Large batch size; Generalization gap; Optimization; Singular-value decomposition
DOI
10.1007/s10489-022-04230-8
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) are typically optimized using various forms of the mini-batch gradient descent algorithm. A major motivation for mini-batch gradient descent is that, with a suitably chosen batch size, available computing resources can be optimally utilized (including parallelization) for fast model training. However, many works report a progressive loss of model generalization when the training batch size is increased beyond certain limits, a scenario commonly referred to as the generalization gap. Although several works have proposed different methods for alleviating the generalization gap problem, a unanimous account of its cause is still lacking in the literature. This is especially important given that recent works have observed that several proposed solutions for the generalization gap problem, such as learning rate scaling and increased training budget, do not in fact resolve it. As such, the main objective of this paper is to investigate and provide new perspectives on the source of generalization loss for DNNs trained with a large batch size. Our analysis suggests that a large training batch size results in increased near-rank loss of units' activation (i.e. output) tensors, which consequently impacts model optimization and generalization. Extensive experiments are performed for validation on popular DNN models such as VGG-16, residual networks (ResNet-56) and LeNet-5 using the CIFAR-10, CIFAR-100, Fashion-MNIST and MNIST datasets.
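The abstract's key claim, that a large training batch size increases near-rank loss of units' activation tensors, lends itself to a simple SVD-based check. The sketch below is a minimal illustration and not the exact metric used in the paper: it computes the fraction of singular values of a layer's activation matrix that are negligible relative to the largest one, and compares a synthetic full-rank activation matrix against a synthetic low-rank one. The function name near_rank_deficiency, the tolerance value, and the example matrices are assumptions introduced purely for illustration.

    import numpy as np

    def near_rank_deficiency(activations: np.ndarray, tol: float = 1e-3) -> float:
        """Fraction of singular values that are negligible relative to the largest one.

        `activations` is a (num_samples, num_units) matrix of a layer's outputs
        collected over a set of inputs; a higher value indicates stronger
        near-rank loss of the activation matrix.
        """
        s = np.linalg.svd(activations, compute_uv=False)  # singular values, descending
        if s[0] == 0.0:
            return 1.0  # all-zero activations are maximally rank deficient
        return float(np.mean(s / s[0] < tol))

    # Synthetic stand-ins (not data from the paper): a well-conditioned activation
    # matrix versus one that is low-rank by construction (rank 8 out of 256 units).
    acts_well_conditioned = np.random.randn(512, 256)
    acts_low_rank = np.random.randn(512, 8) @ np.random.randn(8, 256)
    print(near_rank_deficiency(acts_well_conditioned))  # close to 0
    print(near_rank_deficiency(acts_low_rank))          # close to 1

In practice, one would record activation matrices from models trained with different batch sizes and compare this fraction layer by layer; the choice of tolerance controls how aggressively small singular values are counted as lost rank.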
Pages: 15621-15637
Number of pages: 17
Related Papers
50 records in total
  • [31] Abstraction Mechanisms Predict Generalization in Deep Neural Networks
    Gain, Alex
    Siegelmann, Hava
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [32] Deep neural networks - a developmental perspective
    Juang, Biing Hwang
    APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING, 2016, 5
  • [33] FlashNeuron: SSD-Enabled Large-Batch Training of Very Deep Neural Networks
    Bae, Jonghyun
    Lee, Jongsung
    Jin, Yunho
    Son, Sam
    Kim, Shine
    Jang, Hakbeom
    Ham, Tae Jun
    Lee, Jae W.
    PROCEEDINGS OF THE 19TH USENIX CONFERENCE ON FILE AND STORAGE TECHNOLOGIES (FAST '21), 2021, : 387 - 401
  • [34] Submodular Batch Selection for Training Deep Neural Networks
    Joseph, K. J.
    Teja, Vamshi R.
    Singh, Krishnakant
    Balasubramanian, Vineeth N.
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 2677 - 2683
  • [35] Generalization properties of feed-forward neural networks trained on Lorenz systems
    Scher, Sebastian
    Messori, Gabriele
    NONLINEAR PROCESSES IN GEOPHYSICS, 2019, 26 (04) : 381 - 399
  • [36] Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities
    Chaudhury, Subhajit
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 13714 - 13715
  • [37] FEEDBACK NEURAL NETWORKS - NEW CHARACTERISTICS AND A GENERALIZATION
    KAK, SC
    CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 1993, 12 (02) : 263 - 278
  • [38] A Universal VAD Based on Jointly Trained Deep Neural Networks
    Wang, Qing
    Du, Jun
    Bao, Xiao
    Wang, Zi-Rui
    Dai, Li-Rong
    Lee, Chin-Hui
    16TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2015), VOLS 1-5, 2015, : 2282 - 2286
  • [39] TRP: Trained Rank Pruning for Efficient Deep Neural Networks
    Xu, Yuhui
    Li, Yuxi
    Zhang, Shuai
    Wen, Wei
    Wang, Botao
    Qi, Yingyong
    Chen, Yiran
    Lin, Weiyao
    Xiong, Hongkai
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 977 - 983
  • [40] Teaming Up Pre-Trained Deep Neural Networks
    Deabes, Wael
    Abdel-Hakim, Alaa E.
    2018 INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND INFORMATION SECURITY (ICSPIS), 2018, : 73 - 76