Training neural networks with structured noise improves classification and generalization

Cited: 0
Authors
Benedetti, Marco [1 ]
Ventura, Enrico [1 ,2 ]
Affiliations
[1] Sapienza Univ Roma, Dipartimento Fis, Ple A Moro 2, I-00185 Rome, Italy
[2] Univ PSL, Ecole Normale Super, Lab Phys, ENS, F-75005 Paris, France
Keywords
recurrent neural networks; perceptron learning; unlearning; associative memory; PATTERNS; STORAGE; SPACE; MODEL
DOI
10.1088/1751-8121/ad7b8f
CLC Number
O4 [Physics]
Subject Classification Code
0702
Abstract
The beneficial role of noise injection in learning is a well-established concept in the field of artificial neural networks, suggesting that even biological systems might exploit similar mechanisms to optimize their performance. The training-with-noise (TWN) algorithm proposed by Gardner and collaborators is an emblematic example of a noise-injection procedure for recurrent networks, which can be used to model biological neural systems. We show that adding structure to the noisy training data can substantially improve the algorithm's performance, allowing the network to approach perfect retrieval of the memories and wide basins of attraction, even in the scenario of maximal injected noise. We also prove that the so-called Hebbian Unlearning rule coincides with the TWN algorithm when the noise is maximal and the data are stable fixed points of the network dynamics.
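For readers unfamiliar with the two procedures the abstract compares, the following minimal NumPy sketch illustrates them on a small Hopfield-type network. It is an assumption-laden illustration, not the authors' implementation: the network size, noise level, learning rate, and iteration counts are placeholder values, and a fixed number of dynamics sweeps stands in for exact convergence to a fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 100, 5                             # network size and pattern count (illustrative)
patterns = rng.choice([-1, 1], size=(P, N))

def train_with_noise(patterns, p_noise=0.3, lr=1.0, steps=20000):
    """Perceptron-style training with noise (TWN), sketched.

    At each step a stored pattern is corrupted bitwise with probability
    p_noise, and every neuron whose local field on the noisy state
    disagrees with the clean pattern bit receives a Hebbian correction.
    """
    P, N = patterns.shape
    J = np.zeros((N, N))
    for _ in range(steps):
        xi = patterns[rng.integers(P)]
        flip = rng.random(N) < p_noise    # independent bit flips
        s = np.where(flip, -xi, xi)       # noisy training configuration
        unstable = xi * (J @ s) <= 0      # neurons not aligned with xi
        J += (lr / N) * np.outer(unstable * xi, s)
        np.fill_diagonal(J, 0.0)
    return J

def hebbian_unlearning(J, iters=500, eps=0.01, sweeps=50):
    """Hebbian Unlearning, sketched: relax a random state with
    zero-temperature asynchronous dynamics, then subtract a small
    Hebbian term built on the configuration reached (a fixed number
    of sweeps stands in for full convergence to a fixed point)."""
    N = J.shape[0]
    for _ in range(iters):
        s = rng.choice([-1, 1], size=N)
        for _ in range(sweeps):
            for i in rng.permutation(N):
                s[i] = 1 if J[i] @ s >= 0 else -1
        J -= (eps / N) * np.outer(s, s)
        np.fill_diagonal(J, 0.0)
    return J

# quick check: after TWN, are the stored patterns one-step stable?
J = train_with_noise(patterns)
stable = np.all(patterns * (patterns @ J.T) > 0, axis=1)
print("patterns stable under TWN couplings:", stable.all())
```

Under these assumptions, setting p_noise to 0.5 (maximal noise) and replacing the noisy configurations with stable fixed points of the dynamics turns the TWN update into the anti-Hebbian unlearning step, which is the correspondence the abstract states.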
Pages: 26