Training neural networks with structured noise improves classification and generalization

Cited: 0
Authors
Benedetti, Marco [1]
Ventura, Enrico [1,2]
Affiliations
[1] Sapienza Univ Roma, Dipartimento Fis, Ple A Moro 2, I-00185 Rome, Italy
[2] Univ PSL, Ecole Normale Super, Lab Phys, ENS, F-75005 Paris, France
Keywords
recurrent neural networks; perceptron learning; unlearning; associative memory; PATTERNS; STORAGE; SPACE; MODEL;
DOI
10.1088/1751-8121/ad7b8f
CLC Number
O4 [Physics];
Subject Classification Code
0702;
Abstract
The beneficial role of noise injection in learning is a well-established concept in the field of artificial neural networks, suggesting that even biological systems might take advantage of similar mechanisms to optimize their performance. The training-with-noise (TWN) algorithm proposed by Gardner and collaborators is an emblematic example of a noise-injection procedure in recurrent networks, which can be used to model biological neural systems. We show how adding structure to the noisy training data can substantially improve the algorithm's performance, allowing the network to approach perfect retrieval of the memories and wide basins of attraction, even in the scenario of maximal injected noise. We also prove that the so-called Hebbian Unlearning rule coincides with the TWN algorithm when the noise is maximal and the data are stable fixed points of the network dynamics.
Pages: 26
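
As a rough illustration of the training-with-noise scheme discussed in the abstract, the Python sketch below trains a binary Hopfield-type network by presenting noisy copies of the memories and applying a perceptron-like correction only on the neurons whose local field disagrees with the clean memory. Every name and setting in it (N, P, lam, m_train, the Hebbian initialization, the number of training steps) is an assumption made for the example; it does not reproduce the paper's structured-noise construction.

import numpy as np

rng = np.random.default_rng(0)

N, P = 100, 5               # network size and number of memories (illustrative)
lam, m_train = 0.01, 0.2    # learning rate and training overlap (assumed names)

# Random binary memories xi[mu, i] in {-1, +1}
xi = rng.choice([-1, 1], size=(P, N))

# Start from the Hebbian coupling matrix with zero diagonal
J = xi.T @ xi / N
np.fill_diagonal(J, 0.0)

def noisy_version(pattern, m):
    # Flip each spin independently so the expected overlap with `pattern` is m
    flip = rng.random(pattern.size) < (1.0 - m) / 2.0
    return np.where(flip, -pattern, pattern)

def twn_step(J, xi, mu, m):
    # One training-with-noise update: present a noisy copy of memory mu and
    # correct only the neurons whose local field disagrees with the clean memory
    S = noisy_version(xi[mu], m)
    h = J @ S                          # local fields on the noisy configuration
    unstable = (xi[mu] * h) < 0.0      # mask of misaligned neurons
    J = J + (lam / N) * np.outer(unstable * xi[mu], S)
    np.fill_diagonal(J, 0.0)
    return J

for _ in range(20000):
    J = twn_step(J, xi, rng.integers(P), m_train)

def relax(J, S, sweeps=20):
    # Asynchronous zero-temperature dynamics, used here to test retrieval
    S = S.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(S)):
            S[i] = 1 if J[i] @ S >= 0 else -1
    return S

overlaps = [np.mean(xi[mu] * relax(J, xi[mu])) for mu in range(P)]
print("retrieval overlaps:", np.round(overlaps, 3))

Loosely, if the presented configuration S is itself a stable fixed point of the dynamics (so the sign of the local field equals S at every site), the only updated couplings are those where S and the memory disagree, and each such term reduces to -(lam/N) S_i S_j, an anti-Hebbian contribution; this is the intuition behind the equivalence with Hebbian Unlearning claimed in the abstract for maximal noise.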