Hebbian learning of context in recurrent neural networks

Cited by: 55
Authors: Brunel, N. [1]
Affiliation: [1] Univ Roma La Sapienza, Inst Fis, Ist Nazl Fis Nucl, I-00185 Rome, Italy
DOI: 10.1162/neco.1996.8.8.1677
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification: 081104; 0812; 0835; 1405
Abstract
Single-electrode recordings in the inferotemporal cortex of monkeys during delayed visual memory tasks provide evidence for attractor dynamics in the observed region. The persistent elevated delay activities could be internal representations of features of the learned visual stimuli shown to the monkey during training. When uncorrelated stimuli are presented during training in a fixed sequence, these experiments reveal significant correlations between the internal representations. Recently, a simple attractor neural network model reproduced the measured correlations quantitatively. An underlying assumption of the model is that the synaptic matrix formed during the training phase carries, in its efficacies, information about the contiguity of persistent stimuli in the training sequence. We present here a simple unsupervised learning dynamics that produces such a synaptic matrix when sequences of stimuli are repeatedly presented to the network in a fixed order. The resulting matrix is then shown to convert temporal correlations during training into spatial correlations between attractors. The scenario is that, in the presence of selective delay activity, the activity distribution in the neural assembly at the presentation of each stimulus contains information about both the current stimulus and the previous one (carried by the attractor). Thus the recurrent synaptic matrix can code not only for each of the stimuli presented to the network but also for their context. We combine the idea that, for learning to be effective, synaptic modification should be stochastic, with the fact that attractors provide learnable information about two consecutive stimuli. We calculate explicitly the probability distribution of synaptic efficacies as a function of the training protocol, that is, the order in which stimuli are presented to the network. We then solve for the dynamics of a network composed of integrate-and-fire excitatory and inhibitory neurons with a matrix of synaptic collaterals resulting from the learning dynamics. The network has stable spontaneous activity, and stable delay activity develops after a critical learning stage. The availability of a learning dynamics makes possible a number of experimental predictions for the dependence of the delay activity distributions, and of the correlations between them, on the learning stage and the learning protocol. In particular, it makes specific predictions for pair-associate delay experiments.
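The learning scheme described in the abstract can be illustrated with a small toy simulation. The sketch below is a minimal NumPy version, not the paper's model: it assumes binary stochastic synapses, sparse uncorrelated binary patterns, and a hypothetical carry-over probability lam with which delay-active neurons of the previous stimulus remain active during the next presentation; all parameter values are illustrative, and the paper's integrate-and-fire network is not simulated. It checks only the qualitative claim that repeated presentation in a fixed order converts temporal contiguity into spatial correlation between attractor-driving synaptic fields.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (not taken from the paper) ---
N = 400        # neurons
P = 10         # stimuli, always presented in the same order
f = 0.1        # coding level: fraction of neurons active per stimulus
lam = 0.3      # prob. a delay-active neuron of the previous stimulus
               # stays active during the next presentation (context)
q_pot = 0.05   # potentiation probability (stochastic learning)
q_dep = 0.02   # depression probability
epochs = 50    # training cycles through the fixed sequence

# Uncorrelated sparse binary patterns, one per stimulus
patterns = rng.random((P, N)) < f

# Binary synaptic matrix J[post, pre], initially depressed
J = np.zeros((N, N))

for _ in range(epochs):
    prev = None
    for mu in range(P):
        # Presentation activity = current stimulus plus a stochastic
        # trace of the previous attractor (the "context")
        act = patterns[mu].copy()
        if prev is not None:
            act |= patterns[prev] & (rng.random(N) < lam)
        # Stochastic Hebbian rule on binary synapses:
        # co-active pairs potentiate, pre-active/post-inactive depress
        pot = np.outer(act, act) & (rng.random((N, N)) < q_pot)
        dep = np.outer(~act, act) & (rng.random((N, N)) < q_dep)
        J[pot] = 1.0
        J[dep] = 0.0
        prev = mu
np.fill_diagonal(J, 0.0)

# Temporal contiguity should now show up as spatial correlation:
# the synaptic fields evoked by neighbouring stimuli become similar
fields = patterns.astype(float) @ J.T        # row mu is J @ pattern_mu
C = np.corrcoef(fields)
near = np.mean([C[mu, mu + 1] for mu in range(P - 1)])
far = np.mean([C[mu, nu] for mu in range(P)
               for nu in range(P) if abs(mu - nu) > 1])
print(f"mean field correlation, consecutive stimuli: {near:.3f}")
print(f"mean field correlation, distant stimuli:     {far:.3f}")
```

With settings like these, the consecutive-stimulus correlation should come out noticeably above the distant-pair baseline, which is the qualitative signature the paper attributes to learning of context; the paper itself derives the full efficacy distribution analytically and verifies the attractor correlations in a spiking network.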
Pages: 1677-1710
Number of pages: 34
Related papers (50 records in total)
  • [41] DIALOG CONTEXT LANGUAGE MODELING WITH RECURRENT NEURAL NETWORKS
    Liu, Bing
    Lane, Ian
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 5715 - 5719
  • [42] Hebbian Learning in Spiking Neural Networks With Nanocrystalline Silicon TFTs and Memristive Synapses
    Cantley, Kurtis D.
    Subramaniam, Anand
    Stiegler, Harvey J.
    Chapman, Richard A.
    Vogel, Eric M.
    IEEE TRANSACTIONS ON NANOTECHNOLOGY, 2011, 10 (05) : 1066 - 1073
  • [43] Synapse-type-specific competitive Hebbian learning forms functional recurrent networks
    Eckmann, Samuel
    Young, Edward James
    Gjorgjieva, Julijana
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2024, 121 (25) : 1 - 12
  • [44] Heuristic learning in recurrent neural fuzzy networks
    Ballini, R
    Gomide, F
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2002, 13 (2-4) : 63 - 74
  • [45] Convergence of diagonal recurrent neural networks' learning
    Wang, P
    Li, YF
    Feng, S
    Wei, W
    PROCEEDINGS OF THE 4TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-4, 2002, : 2365 - 2369
  • [46] Stable reinforcement learning with recurrent neural networks
    Knight J.N.
    Anderson C.
    Journal of Control Theory and Applications, 2011, 9 (3): 410 - 420
  • [47] Unsupervised learning in LSTM recurrent neural networks
    Klapper-Rybicka, M
    Schraudolph, NN
    Schmidhuber, J
    ARTIFICIAL NEURAL NETWORKS-ICANN 2001, PROCEEDINGS, 2001, 2130 : 684 - 691
  • [48] Existence and learning of oscillations in recurrent neural networks
    Townley, S
    Ilchmann, A
    Weiss, MG
    Mcclements, W
    Ruiz, AC
    Owens, DH
    Prätzel-Wolters, D
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2000, 11 (01): 205 - 214
  • [49] Learning Device Models with Recurrent Neural Networks
    Clemens, John
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018