Metaplasticity and memory in multilevel recurrent feed-forward networks

Times Cited: 0
Authors
Zanardi, Gianmarco [1 ,2 ]
Bettotti, Paolo [1 ]
Morand, Jules [1 ,2 ]
Pavesi, Lorenzo [1 ]
Tubiana, Luca [1 ,2 ]
Affiliations
[1] Univ Trento, Phys Dept, Via Sommar 14, I-38123 Trento, Italy
[2] Trento Inst Fundamental Phys & Applicat, INFN TIFPA, I-38123 Trento, Italy
Funding
European Research Council;
Keywords
MODELS;
DOI
10.1103/PhysRevE.110.054304
CLC Classification
O35 [Fluid Mechanics]; O53 [Plasma Physics];
Discipline Codes
070204; 080103; 080704;
Abstract
Network systems can exhibit memory effects in which the interactions between different pairs of nodes adapt in time, leading to the emergence of preferred connections, patterns, and subnetworks. To a first approximation, this memory can be modeled through a "plastic" Hebbian or homophily mechanism, in which edges get reinforced proportionally to the amount of information flowing through them. However, recent studies on glia-neuron networks have highlighted how memory can evolve due to more complex dynamics, including multilevel network structures and "metaplastic" effects that modulate reinforcement. Inspired by those systems, here we develop a simple and general model for the dynamics of an adaptive network with an additional metaplastic mechanism that varies the rate of Hebbian strengthening of its edge connections. The metaplastic term acts on a second network level in which edges are grouped together, simulating local, longer timescale effects. Specifically, we consider a biased random walk on a cyclic feed-forward network. The random walk chooses its steps according to the weights of the network edges. The weights evolve through a Hebbian mechanism modulated by a metaplastic reinforcement, biasing the walker to prefer edges that have been already explored. We study the dynamical emergence (memorization) of preferred paths and their retrieval and identify three regimes: one dominated by the Hebbian term, one in which the metareinforcement drives memory formation, and a balanced one. We show that, in the latter two regimes, metareinforcement allows the retrieval of a previously stored path even after the weights have been reset to zero to erase Hebbian memory.
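The abstract specifies the model only qualitatively. The minimal Python sketch below illustrates one way the described dynamics could be wired together: a biased random walk on a layered cyclic feed-forward graph, a Hebbian weight update on traversed edges, and a slower metaplastic rate acting on groups of edges. The graph layout, the linear update rules, the grouping of edges by their source node, and all parameter names (ALPHA, BETA, L, W) are illustrative assumptions, not the authors' equations.

```python
# Hedged sketch of the abstract's model. Functional forms, grouping of edges,
# and parameter values are assumptions for illustration only.
import random

L, W = 5, 3        # layers and nodes per layer; last layer feeds back to the first (cyclic)
ALPHA = 0.1        # assumed Hebbian reinforcement increment
BETA = 0.05        # assumed metaplastic reinforcement increment
STEPS = 10_000

# Edge weights w[(layer, i, j)] between node i in `layer` and node j in the next layer.
w = {(l, i, j): 1.0 for l in range(L) for i in range(W) for j in range(W)}
# One metaplastic rate per edge group; here all out-edges of a node form a group
# (the "second network level" acting on a longer timescale). The paper's grouping may differ.
m = {(l, i): 1.0 for l in range(L) for i in range(W)}

def step(layer: int, node: int) -> int:
    """Choose the next node with probability proportional to the outgoing edge weights."""
    weights = [w[(layer, node, j)] for j in range(W)]
    return random.choices(range(W), weights=weights)[0]

layer, node = 0, 0
for _ in range(STEPS):
    nxt = step(layer, node)
    # Hebbian reinforcement of the traversed edge, modulated by the
    # slower, group-level metaplastic rate.
    w[(layer, node, nxt)] += ALPHA * m[(layer, node)]
    m[(layer, node)] += BETA                 # metaplastic memory accumulates per group
    layer, node = (layer + 1) % L, nxt       # cyclic feed-forward step

# Erasing Hebbian memory: reset edge weights to a uniform baseline while keeping the
# metaplastic rates. Groups along the previously walked path retain larger m, so their
# edges are reinforced faster on a new walk, which is the retrieval effect the abstract describes.
for key in w:
    w[key] = 1.0
```

In this sketch, retrieval can be probed by running the walk again after the reset and checking whether the walker re-converges to the previously reinforced path; the regimes discussed in the abstract would correspond to different relative magnitudes of the assumed ALPHA and BETA terms.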
Pages: 12