Spiking representation learning for associative memories

Cited by: 1
Authors
Ravichandran, Naresh [1]
Lansner, Anders [1,2]
Herman, Pawel [1,3,4]
Affiliations
[1] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, Dept Computat Sci & Technol, Computat Cognit Brain Sci Grp, Stockholm, Sweden
[2] Stockholm Univ, Dept Math, Stockholm, Sweden
[3] KTH Royal Inst Technol, Digital Futures, Stockholm, Sweden
[4] Swedish E Sci Res Ctr SeRC, Stockholm, Sweden
Funding
Swedish Research Council
Keywords
spiking neural networks; associative memory; attractor dynamics; Hebbian learning; structural plasticity; BCPNN; representation learning; unsupervised learning; STRUCTURAL PLASTICITY; SILENT SYNAPSES; NEURAL-NETWORKS; NEURONAL CIRCUITS; MODELS; DYNAMICS; COMPLETION; PRINCIPLES; SEPARATION; CORTEX;
DOI
10.3389/fnins.2024.1439414
CLC number
Q189 [Neuroscience];
Subject classification code
071006;
Abstract
Networks of interconnected neurons communicating through spiking signals offer the bedrock of neural computations. Our brain's spiking neural networks have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved to be difficult for a variety of reasons. Crucially, scaling SNNs to large networks and processing large-scale real-world datasets have been challenging, especially when compared to their non-spiking deep learning counterparts. The critical operation that is needed of SNNs is the ability to learn distributed representations from data and use these representations for perceptual, cognitive and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations leveraging Hebbian synaptic and activity-dependent structural plasticity coupled with neuron-units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). Crucially, the architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories. We evaluated the model on properties relevant for attractor-based associative memories such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
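The pattern-completion property evaluated in the abstract can be illustrated with a generic Hopfield-style Hebbian attractor memory. This is a minimal sketch of that general idea only, not the paper's spiking BCPNN model; all sizes and names below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic Hopfield-style associative memory: Hebbian storage plus
# attractor recall dynamics (illustrative, NOT the paper's BCPNN model).
N, P = 200, 5                                  # units, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))    # random bipolar memories
W = (patterns.T @ patterns) / N                # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)                       # no self-connections

def complete(cue, sweeps=10):
    """Asynchronous recall: update units in random order until settled."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 15% of a stored pattern; the attractor dynamics restore it.
cue = patterns[0].copy()
flipped = rng.choice(N, size=30, replace=False)
cue[flipped] *= -1
recalled = complete(cue)
```

With only 5 patterns in 200 units the memory load is far below capacity, so the corrupted cue falls inside the stored pattern's basin of attraction and recall is exact.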
Pages: 21
Related papers
50 entries in total
  • [21] A LEARNING AND FORGETTING ALGORITHM IN ASSOCIATIVE MEMORIES - THE EIGENSTRUCTURE METHOD
    YEN, G
    MICHEL, AN
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-ANALOG AND DIGITAL SIGNAL PROCESSING, 1992, 39 (04) : 212 - 225
  • [22] OPTIMIZING SYNAPTIC LEARNING RULES IN LINEAR ASSOCIATIVE MEMORIES
    DAYAN, P
    WILLSHAW, DJ
    BIOLOGICAL CYBERNETICS, 1991, 65 (04) : 253 - 265
  • [23] ASSOCIATIVE MEMORIES
    KRIGMAN, A
    INSTRUMENTS & CONTROL SYSTEMS, 1972, 45 (03): : 35 - +
  • [24] Sparse representation and associative learning in multisensory integration
    Zhang, LQ
    PROCEEDINGS OF THE 2005 INTERNATIONAL CONFERENCE ON NEURAL NETWORKS AND BRAIN, VOLS 1-3, 2005, : 1622 - 1626
  • [25] LEARNING ASSOCIATIVE REPRESENTATION FOR FACIAL EXPRESSION RECOGNITION
    Du, Yangtao
    Yang, Dingkang
    Zhai, Peng
    Li, Mingchen
    Zhang, Lihua
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 889 - 893
  • [26] Statistical mechanics of learning via reverberation in bidirectional associative memories
    Centonze, Martino Salomone
    Kanter, Ido
    Barra, Adriano
    PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2024, 637
  • [27] Simplification of the Learning Phase in the Alpha-Beta Associative Memories
    Catalan-Salgado, Edgar A.
    Yanez-Marquez, Cornelio
    Argüelles-Cruz, Amadeo José
    CERMA 2008: ELECTRONICS, ROBOTICS AND AUTOMOTIVE MECHANICS CONFERENCE, PROCEEDINGS, 2008, : 428 - +
  • [28] Attractor networks and associative memories with STDP learning in RRAM synapses
    Milo, V.
    Ielmini, D.
    Chicca, E.
    2017 IEEE INTERNATIONAL ELECTRON DEVICES MEETING (IEDM), 2017,
  • [29] Associative Learning: How nitric oxide helps update memories
    Green, Daniel J. E.
    Lin, Andrew C.
    ELIFE, 2020, 9
  • [30] Variance of covariance rules for associative matrix memories and reinforcement learning
    Dayan, Peter
    Sejnowski, Terrence J.
    Neural Computation, 1993, 5 (02)