A new perspective for understanding generalization gap of deep neural networks trained with large batch sizes

Cited by: 3
Authors
Oyedotun, Oyebade K. [1]
Papadopoulos, Konstantinos [1]
Aouada, Djamila [1]
Affiliations
[1] University of Luxembourg, Interdisciplinary Centre for Security, Reliability and Trust (SnT), L-1855 Luxembourg, Luxembourg
Keywords
Neural network; Large batch size; Generalization gap; Optimization; Singular-value decomposition
DOI
10.1007/s10489-022-04230-8
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) are typically optimized using various forms of the mini-batch gradient descent algorithm. A major motivation for mini-batch gradient descent is that, with a suitably chosen batch size, available computing resources can be optimally utilized (including parallelization) for fast model training. However, many works report a progressive loss of model generalization when the training batch size is increased beyond some limit, a scenario commonly referred to as the generalization gap. Although several works have proposed different methods for alleviating the generalization gap problem, a unanimous account of the generalization gap is still lacking in the literature. This is especially important given that recent works have observed that several proposed solutions to the generalization gap problem, such as learning rate scaling and an increased training budget, do not in fact resolve it. As such, the main aim of this paper is to investigate and provide a new perspective on the source of generalization loss for DNNs trained with a large batch size. Our analysis suggests that a large training batch size results in increased near-rank loss of units' activation (i.e. output) tensors, which consequently impacts model optimization and generalization. Extensive experiments are performed for validation on popular DNN models such as VGG-16, residual network (ResNet-56) and LeNet-5 using the CIFAR-10, CIFAR-100, Fashion-MNIST and MNIST datasets.
Pages: 15621-15637
Number of pages: 17
Related Papers
50 records in total
  • [1] A new perspective for understanding generalization gap of deep neural networks trained with large batch sizes
    Oyebade K. Oyedotun
    Konstantinos Papadopoulos
    Djamila Aouada
    Applied Intelligence, 2023, 53: 15621-15637
  • [2] Train longer, generalize better: closing the generalization gap in large batch training of neural networks
    Hoffer, Elad
    Hubara, Itay
    Soudry, Daniel
    Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017, 30
  • [3] Understanding and mitigating noise in trained deep neural networks
    Semenova, Nadezhda
    Larger, Laurent
    Brunner, Daniel
    Neural Networks, 2022, 146: 151-160
  • [4] Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels
    Chen, Pengfei
    Liao, Benben
    Chen, Guangyong
    Zhang, Shengyu
    International Conference on Machine Learning, Vol. 97, 2019, 97
  • [5] Taming the Noisy Gradient: Train Deep Neural Networks with Small Batch Sizes
    Zhang, Yikai
    Qu, Hui
    Chen, Chao
    Metaxas, Dimitris
    Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019: 4348-4354
  • [6] Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks
    Chen, Jinghui
    Zhou, Dongruo
    Tang, Yiqi
    Yang, Ziyan
    Cao, Yuan
    Gu, Quanquan
    Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020: 3267-3275
  • [7] New Perspective of Interpretability of Deep Neural Networks
    Kimura, Masanari
    Tanaka, Masayuki
    2020 3rd International Conference on Information and Computer Technologies (ICICT 2020), 2020: 78-85
  • [8] Universal mean-field upper bound for the generalization gap of deep neural networks
    Ariosto, S.
    Pacelli, R.
    Ginelli, F.
    Gherardi, M.
    Rotondo, P.
    Physical Review E, 2022, 105 (06)
  • [9] Understanding Attention and Generalization in Graph Neural Networks
    Knyazev, Boris
    Taylor, Graham W.
    Amer, Mohamed R.
    Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019, 32
  • [10] Open set task augmentation facilitates generalization of deep neural networks trained on small data sets
    Wadhah Zai El Amri
    Felix Reinhart
    Wolfram Schenck
    Neural Computing and Applications, 2022, 34: 6067-6083