A new perspective for understanding generalization gap of deep neural networks trained with large batch sizes

Cited by: 3
Authors:
Oyedotun, Oyebade K. [1 ]
Papadopoulos, Konstantinos [1 ]
Aouada, Djamila [1 ]
Affiliations:
[1] Univ Luxembourg, Interdisciplinary Ctr Secur Reliabil & Trust SnT, L-1855 Luxembourg, Luxembourg
Keywords:
Neural network; Large batch size; Generalization gap; Optimization; Singular value decomposition
DOI:
10.1007/s10489-022-04230-8
Chinese Library Classification (CLC):
TP18 [Theory of artificial intelligence];
Subject classification codes:
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) are typically optimized using various forms of the mini-batch gradient descent algorithm. A major motivation for mini-batch gradient descent is that, with a suitably chosen batch size, available computing resources (including parallelization) can be optimally utilized for fast model training. However, many works report a progressive loss of model generalization when the training batch size is increased beyond some limit, a scenario commonly referred to as the generalization gap. Although several works have proposed different methods for alleviating the generalization gap problem, a unanimous account of the generalization gap is still lacking in the literature. This is especially important given that recent works have observed that several proposed solutions, such as learning rate scaling and an increased training budget, do not in fact resolve it. As such, the main aim of this paper is to investigate and provide new perspectives on the source of generalization loss for DNNs trained with a large batch size. Our analysis suggests that a large training batch size results in increased near-rank loss of units' activation (i.e. output) tensors, which consequently impairs model optimization and generalization. Extensive validation experiments are performed on popular DNN models, namely VGG-16, residual network (ResNet-56) and LeNet-5, using the CIFAR-10, CIFAR-100, Fashion-MNIST and MNIST datasets.
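The abstract's central claim, that large-batch training increases near-rank loss of layer activation tensors, can be probed by inspecting the singular values of a layer's activation matrix, which is presumably why singular value decomposition appears among the keywords. The Python sketch below is only a minimal illustration of that idea, not the authors' published procedure; the model, layer handle and relative tolerance are hypothetical choices.

import torch

def effective_rank(activations: torch.Tensor, rel_tol: float = 1e-3) -> int:
    # `activations` is a (batch_size, num_units) matrix collected from one layer.
    # We count singular values above rel_tol * (largest singular value); a lower
    # count for the same layer width indicates near-rank loss of the activations.
    mat = activations.reshape(activations.shape[0], -1).float()
    svals = torch.linalg.svdvals(mat)  # singular values in descending order
    return int((svals > rel_tol * svals[0]).sum().item())

# Hypothetical usage: collect one layer's activations with a forward hook and
# compare the effective rank of small-batch versus large-batch trained models.
# acts = []
# hook = model.layer3.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
# model(images)                     # one forward pass on a batch of inputs
# hook.remove()
# print(effective_rank(acts[0]))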
Pages: 15621-15637
Number of pages: 17