STSF: Spiking Time Sparse Feedback Learning for Spiking Neural Networks

Citations: 0
Authors
He, Ping [1 ,2 ]
Xiao, Rong [1 ,2 ]
Tang, Chenwei [1 ,2 ]
Huang, Shudong [1 ,2 ]
Lv, Jiancheng [1 ,2 ]
Tang, Huajin [3 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Chengdu 610065, Peoples R China
[2] Minist Educ, Engn Res Ctr Machine Learning & Ind Intelligence, Chengdu 610065, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Global-local spiking learning; sparse direct feedback alignment (DFA); spiking neural networks (SNNs); vanilla spike-timing-dependent plasticity (STDP); OPTIMIZATION; PLASTICITY; NEURONS;
DOI
10.1109/TNNLS.2025.3527700
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking neural networks (SNNs) are biologically plausible models known for their computational efficiency. A significant advantage of SNNs lies in the binary transmission of information through spike trains, which eliminates the need for multiplication operations. However, due to the spatio-temporal nature of SNNs, directly applying traditional backpropagation (BP) training still incurs significant computational costs. Meanwhile, learning methods based on unsupervised synaptic plasticity provide an alternative for training SNNs but often yield suboptimal results. Thus, efficiently training high-accuracy SNNs remains a challenge. In this article, we propose a highly efficient and biologically plausible spiking time sparse feedback (STSF) learning method. The algorithm modifies synaptic weights by combining global supervised learning, in which a neuromodulator delivers the error signal through sparse direct feedback alignment (DFA), with local homeostatic learning based on vanilla spike-timing-dependent plasticity (STDP). This neuromorphic global-local learning acts on instantaneous synaptic activity, enabling each network layer to be optimized independently and simultaneously, thereby improving biological plausibility, enhancing parallelism, and reducing storage overhead. The sparse fixed random feedback connections used for global error modulation replace multiplication operations with selection operations, further improving computational efficiency. Experimental results demonstrate that the proposed algorithm markedly reduces computational cost while achieving accuracy comparable to current state-of-the-art algorithms across a wide range of classification tasks. Our implementation code is available at https://github.com/hppeace/STSF.
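The global-local update described in the abstract (a sparse-DFA error term delivered by selection rather than multiplication, combined with a local vanilla STDP term) can be illustrated with the following minimal NumPy sketch. This is not the authors' implementation (see https://github.com/hppeace/STSF); all names, shapes, learning rates, and the exact combination rule below are assumptions made for illustration.

import numpy as np

# Minimal sketch of a global-local weight update in the spirit of STSF.
# Assumed layer sizes; not taken from the paper.
rng = np.random.default_rng(0)
n_out, n_hidden, n_in = 10, 100, 784

# Fixed sparse random feedback: each hidden neuron listens to a single output
# error component with a random sign, so projecting the error is a selection
# (indexing) rather than a dense matrix multiplication.
feedback_idx = rng.integers(0, n_out, size=n_hidden)
feedback_sign = rng.choice([-1.0, 1.0], size=n_hidden)

def stsf_like_update(w, error, pre_spikes, post_spikes, pre_trace, post_trace,
                     lr_global=1e-3, lr_local=1e-4):
    """One illustrative update for a hidden weight matrix w of shape (n_hidden, n_in).

    error:                   output-layer error vector, shape (n_out,)
    pre_spikes/post_spikes:  binary spikes at the current time step
    pre_trace/post_trace:    low-pass-filtered spike traces of the same shapes
    """
    # Global term (sparse DFA): project the error by selection, then gate it
    # with instantaneous post-synaptic spiking activity.
    modulator = feedback_sign * error[feedback_idx]                  # (n_hidden,)
    dw_global = -lr_global * np.outer(modulator * post_spikes, pre_trace)

    # Local term (vanilla STDP): potentiate when a post spike follows recent
    # pre-synaptic activity; depress when a pre spike follows recent post activity.
    dw_local = lr_local * (np.outer(post_spikes, pre_trace)
                           - np.outer(post_trace, pre_spikes))

    return w + dw_global + dw_local

# Example usage with random spike data.
w = rng.normal(0.0, 0.1, size=(n_hidden, n_in))
pre_spikes = (rng.random(n_in) < 0.05).astype(float)
post_spikes = (rng.random(n_hidden) < 0.05).astype(float)
pre_trace, post_trace = rng.random(n_in), rng.random(n_hidden)
error = rng.normal(size=n_out)
w = stsf_like_update(w, error, pre_spikes, post_spikes, pre_trace, post_trace)

Because both terms depend only on current spikes, traces, and the layer's own feedback indices, each layer can be updated independently and in parallel, which matches the parallelism and storage arguments made in the abstract.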
Pages: 14