The Weights Reset Technique for Deep Neural Networks Implicit Regularization

Cited by: 0
Authors:
Plusch, Grigoriy [1 ]
Arsenyev-Obraztsov, Sergey [1 ]
Kochueva, Olga [1 ]
Affiliations:
[1] Natl Univ Oil & Gas Gubkin Univ, Dept Appl Math & Comp Modeling, 65 Leninsky Prospekt, Moscow 119991, Russia
Keywords:
machine learning; deep learning; implicit regularization; computer vision; representations
DOI: 10.3390/computation11080148
CLC Number: O1 [Mathematics]
Discipline Code: 0701; 070101
Abstract:
We present a new regularization method called Weights Reset, which consists of periodically resetting a random portion of layer weights during training using predefined probability distributions. The technique was applied and tested on several popular classification datasets: Caltech-101, CIFAR-100, and Imagenette. We compare the results with those of traditional regularization methods. The tests demonstrate that Weights Reset is competitive, achieving the best performance on the Imagenette dataset and on the challenging, unbalanced Caltech-101 dataset. The method also shows potential for preventing vanishing and exploding gradients. However, this analysis is preliminary, and further comprehensive studies are needed to understand the capabilities and limitations of the Weights Reset method. The observed results suggest that Weights Reset can serve as an effective extension of traditional regularization methods and can help improve model performance and generalization.
Pages: 16
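
Since only the abstract is available here, the following is a minimal sketch in PyTorch of how the described idea might be implemented: every few optimizer steps, a Bernoulli mask selects a fraction of each weight tensor and the selected entries are re-drawn from a fixed zero-mean Gaussian. The helper name weights_reset, the reset period reset_every, the probability 0.05, and the Gaussian re-initialization are illustrative assumptions, not the paper's exact schedule or distributions.

    import torch
    import torch.nn as nn

    def weights_reset(model: nn.Module, reset_prob: float = 0.05,
                      init_std: float = 0.02) -> None:
        # Re-draw a random subset of weight entries from a predefined
        # distribution (here: N(0, init_std^2)); biases are left untouched.
        with torch.no_grad():
            for name, param in model.named_parameters():
                if not name.endswith("weight"):
                    continue  # skip biases and other parameters
                mask = torch.rand_like(param) < reset_prob  # Bernoulli mask
                fresh = torch.randn_like(param) * init_std  # fresh samples
                param[mask] = fresh[mask]

    # Usage: periodically reset a small fraction of weights during training.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    reset_every = 100  # hypothetical period; the paper may use another schedule

    for step in range(1000):
        x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        if (step + 1) % reset_every == 0:
            weights_reset(model, reset_prob=0.05)

Resetting only a small random portion of each weight tensor keeps most of the learned representation intact while injecting fresh, finite-scale values, which is consistent with the abstract's suggestion that the method may help against vanishing and exploding gradients.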