The Weights Reset Technique for Deep Neural Networks Implicit Regularization

Times Cited: 0
Authors
Plusch, Grigoriy [1 ]
Arsenyev-Obraztsov, Sergey [1 ]
Kochueva, Olga [1 ]
Affiliations
[1] Natl Univ Oil & Gas Gubkin Univ, Dept Appl Math & Comp Modeling, 65 Leninsky Prospekt, Moscow 119991, Russia
Keywords
machine learning; deep learning; implicit regularization; computer vision; representations
DOI
10.3390/computation11080148
CLC Number
O1 [Mathematics]
Subject Classification Codes
0701; 070101
Abstract
We present a new regularization method called Weights Reset, which periodically resets a random portion of layer weights during training by re-drawing them from predefined probability distributions. The technique was applied and tested on several popular classification datasets: Caltech-101, CIFAR-100, and Imagenette. We compare these results with those of other traditional regularization methods. The test results demonstrate that the Weights Reset method is competitive, achieving the best performance on the Imagenette dataset and on the challenging, unbalanced Caltech-101 dataset. The method also shows potential to prevent vanishing and exploding gradients. However, this analysis is preliminary, and further comprehensive studies are needed to understand the capabilities and limitations of the Weights Reset method. The observed results indicate that Weights Reset can be regarded as an effective extension of traditional regularization methods and can help improve model performance and generalization.
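For illustration only, the following is a minimal sketch of the reset step as described in the abstract: after a fixed number of training steps, a random fraction of a layer's weights is re-drawn from a predefined distribution. The framework (PyTorch), the hyperparameter names (reset_fraction, reset_every), and the normal re-initialization distribution are assumptions made for this sketch, not details taken from the paper.

```python
# Sketch of a "Weights Reset" step: re-draw a random subset of a layer's
# weights from a predefined distribution at fixed intervals during training.
# Hyperparameters and the N(0, 0.05^2) distribution are illustrative choices.
import torch
import torch.nn as nn


def weights_reset(layer: nn.Linear, reset_fraction: float = 0.1) -> None:
    """Re-initialize a random fraction of the layer's weights in place."""
    with torch.no_grad():
        # Boolean mask selecting roughly `reset_fraction` of the weights.
        mask = torch.rand_like(layer.weight) < reset_fraction
        # Fresh values drawn from the predefined distribution (assumed normal here).
        fresh = torch.randn_like(layer.weight) * 0.05
        layer.weight[mask] = fresh[mask]


# Hypothetical usage inside a training loop:
# for step, (x, y) in enumerate(loader):
#     loss = criterion(model(x), y)
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad()
#     if step % reset_every == 0:                 # periodic reset
#         for m in model.modules():
#             if isinstance(m, nn.Linear):
#                 weights_reset(m, reset_fraction=0.1)
```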
Pages: 16
Related Papers
50 records in total
  • [1] Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks
    Razin, Noam
    Maman, Asaf
    Cohen, Nadav
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [2] Threshout Regularization for Deep Neural Networks
    Williams, Travis
    Li, Robert
    SOUTHEASTCON 2021, 2021, : 728 - 735
  • [3] Sampling weights of deep neural networks
    Bolager, Erik Lien
    Burak, Iryna
    Datar, Chinmay
    Sun, Qing
    Dietrich, Felix
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [4] IMPLICIT SALIENCY IN DEEP NEURAL NETWORKS
    Sun, Yutong
    Prabhushankar, Mohit
    AlRegib, Ghassan
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 2915 - 2919
  • [5] Combining Explicit and Implicit Regularization for Efficient Learning in Deep Networks
    Zhao, Dan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [6] Towards Stochasticity of Regularization in Deep Neural Networks
    Sandjakoska, Ljubinka
    Bogdanova, Ana Madevska
    2018 14TH SYMPOSIUM ON NEURAL NETWORKS AND APPLICATIONS (NEUREL), 2018,
  • [7] Regularization of deep neural networks with spectral dropout
    Khan, Salman H.
    Hayat, Munawar
    Porikli, Fatih
    NEURAL NETWORKS, 2019, 110 : 82 - 90
  • [8] Sparse synthesis regularization with deep neural networks
    Obmann, Daniel
    Schwab, Johannes
    Haltmeier, Markus
    2019 13TH INTERNATIONAL CONFERENCE ON SAMPLING THEORY AND APPLICATIONS (SAMPTA), 2019,
  • [9] Group sparse regularization for deep neural networks
    Scardapane, Simone
    Comminiello, Danilo
    Hussain, Amir
    Uncini, Aurelio
    NEUROCOMPUTING, 2017, 241 : 81 - 89
  • [10] Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks
    Gidel, Gauthier
    Bach, Francis
    Lacoste-Julien, Simon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32