Spectral Batch Normalization: Normalization in the Frequency Domain

Cited by: 0
Authors
Cakaj, Rinor [1 ,2 ]
Mehnert, Jens [1 ]
Yang, Bin [3 ]
Affiliations
[1] Robert Bosch GmbH, Signal Proc, D-71229 Leonberg, Germany
[2] Univ Stuttgart, D-71229 Leonberg, Germany
[3] Univ Stuttgart, ISS, D-70550 Stuttgart, Germany
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN | 2023
Keywords
NEURAL-NETWORKS;
DOI
10.1109/IJCNN54540.2023.10191931
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Regularization is a set of techniques used to improve the generalization ability of deep neural networks. In this paper, we introduce spectral batch normalization (SBN), a novel and effective method that improves generalization by normalizing feature maps in the frequency (spectral) domain. At initialization, the activations of residual networks without batch normalization (BN) tend to explode exponentially with network depth. This leads to extremely large feature map norms even though the parameters are relatively small, and these explosive dynamics can be very detrimental to learning. BN makes weight decay on the scale and shift parameters gamma and beta approximately equivalent to an additive penalty on the norm of the feature maps, which limits extremely large feature map norms to a certain degree. It was previously shown that preventing explosive growth at the final layer, at initialization and during training, can recover a large part of BN's generalization boost in ResNets. However, we show experimentally that, despite this approximate additive penalty, feature maps in deep neural networks (DNNs) tend to explode at the beginning of training and contain large values throughout training. This phenomenon also occurs, in a weakened form, in non-residual networks. Intuitively, large values in feature maps are undesirable because they exert too much influence on the prediction compared to other parts of the feature map. SBN addresses large feature maps by normalizing them in the frequency domain. In our experiments, we show empirically that SBN prevents exploding feature maps at initialization and large feature map values during training. Moreover, normalizing feature maps in the frequency domain leads to more uniformly distributed frequency components, which discourages DNNs from relying on single frequency components of feature maps. These effects, together with others (e.g., noise injection and the scaling and shifting of the feature maps), regularize the training of residual and non-residual networks. We show experimentally that using SBN in addition to standard regularization methods improves the performance of DNNs by a relevant margin, e.g., ResNet50 on CIFAR-100 by 2.31%, on ImageNet by 0.71% (from 76.80% to 77.51%), and VGG19 on CIFAR-100 by 0.66%.
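The following is a minimal, hypothetical PyTorch sketch of the idea described in the abstract: feature maps are transformed with a 2D FFT, each (channel, frequency) bin is normalized with batch statistics, a learnable scale (gamma) and shift (beta) are applied, and the result is transformed back to the spatial domain. The class name SpectralBatchNorm2d, the use of rfft2, the pooling of statistics over the real and imaginary parts, the fixed spatial size, and the absence of running statistics for inference are assumptions made for illustration, not the authors' exact formulation.

# Hypothetical sketch of spectral batch normalization (SBN); details are assumptions,
# not the authors' exact formulation.
import torch
import torch.nn as nn


class SpectralBatchNorm2d(nn.Module):
    def __init__(self, num_channels: int, height: int, width: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # rfft2 keeps only the non-redundant half of the spectrum along the last axis.
        freq_w = width // 2 + 1
        # Learnable scale (gamma) and shift (beta), one per (channel, frequency) bin,
        # shared between the real and imaginary parts (an assumption).
        self.gamma = nn.Parameter(torch.ones(1, num_channels, height, freq_w))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, height, freq_w))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) real-valued feature maps.
        spec = torch.fft.rfft2(x, norm="ortho")      # complex spectrum
        parts = torch.view_as_real(spec)             # last dim: real and imaginary parts
        # Batch statistics per (channel, frequency) bin, pooled over batch and real/imag.
        mean = parts.mean(dim=(0, 4), keepdim=True)
        var = parts.var(dim=(0, 4), unbiased=False, keepdim=True)
        parts = (parts - mean) / torch.sqrt(var + self.eps)
        # Scale and shift in the frequency domain, then return to the spatial domain.
        parts = parts * self.gamma.unsqueeze(-1) + self.beta.unsqueeze(-1)
        spec = torch.view_as_complex(parts.contiguous())
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")


if __name__ == "__main__":
    sbn = SpectralBatchNorm2d(num_channels=64, height=32, width=32)
    x = torch.randn(8, 64, 32, 32)
    print(sbn(x).shape)  # torch.Size([8, 64, 32, 32])

Normalizing each frequency bin separately is what pushes the spectrum of the feature maps toward a more uniform distribution, which is the mechanism the abstract credits with discouraging reliance on single frequency components.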
Pages: 10