Gradient Descent for Spiking Neural Networks

Citations: 0
Authors
Huh, Dongsung [1]
Sejnowski, Terrence J. [1]
Affiliations
[1] Salk Inst Biol Studies, La Jolla, CA 92037 USA
Keywords
ERROR-BACKPROPAGATION; RULE
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Most large-scale network models use neurons with static nonlinearities that produce analog output, even though information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking neural networks. Here, we present a gradient descent method for optimizing spiking network models by introducing a differentiable formulation of spiking dynamics and deriving the exact gradient calculation. For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (approximately millisecond-scale) spike-based interactions for efficient encoding of information, and a delayed-memory task over an extended duration (approximately one second). The results show that the gradient descent approach indeed optimizes network dynamics on the time scale of individual spikes as well as on behavioral time scales. In conclusion, our method yields a general-purpose supervised learning algorithm for spiking neural networks, which can facilitate further investigation of spike-based computation.
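To illustrate the flavor of the approach described in the abstract, the sketch below trains a small recurrent network in which the hard spiking threshold is replaced by a smooth gate function, so the entire simulation is differentiable end to end and the loss gradient is obtained exactly by automatic differentiation. This is a minimal JAX illustration, not the paper's exact formulation: the sigmoid gate, the crude soft reset, the time constants, the network sizes, and the sine-tracking toy task are all assumptions made for this example.

# Minimal sketch of differentiable spiking dynamics trained by gradient
# descent (an illustration, not the paper's exact model).
import jax
import jax.numpy as jnp

def gate(v, beta=5.0):
    # Smooth surrogate for the hard threshold step(v); beta is an assumed slope.
    return jax.nn.sigmoid(beta * v)

def simulate(params, inputs, dt=1e-3, tau=0.02):
    """Euler-integrate leaky membrane dynamics; returns the readout trace."""
    W_in, W_rec, W_out = params
    n = W_rec.shape[0]

    def step(v, x):
        s = gate(v)                                # graded "spike" output
        dv = (-v + W_rec @ s + W_in @ x) / tau     # leaky integration
        v = v + dt * dv - s                        # crude soft reset after firing
        return v, W_out @ s

    _, readout = jax.lax.scan(step, jnp.zeros(n), inputs)
    return readout

def loss(params, inputs, target):
    # Mean-squared error between readout and target; differentiable throughout.
    return jnp.mean((simulate(params, inputs) - target) ** 2)

# Hypothetical toy task: map a random input stream onto a sine-wave target.
k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)
n_in, n, n_out, T = 3, 20, 1, 200
params = (0.5 * jax.random.normal(k1, (n, n_in)),
          0.5 * jax.random.normal(k2, (n, n)) / jnp.sqrt(n),
          0.5 * jax.random.normal(k3, (n_out, n)))
inputs = jax.random.normal(k4, (T, n_in))
target = jnp.sin(jnp.linspace(0.0, 6.28, T))[:, None]

grad_fn = jax.jit(jax.grad(loss))
lr = 0.05
for _ in range(100):                               # plain gradient descent
    grads = grad_fn(params, inputs, target)
    params = tuple(p - lr * g for p, g in zip(params, grads))

Because the threshold is smoothed, reverse-mode differentiation propagates error signals through every simulation step, which is what lets gradient descent shape dynamics at the level of individual spike events; the paper derives the exact gradient for its own differentiable spiking dynamics rather than relying on a generic sigmoid surrogate like the one assumed here.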
Pages: 11
Related Papers
50 records in total (first 10 shown)
  • [1] Fractional Gradient Descent Method for Spiking Neural Networks
    Yang, Honggang
    Chen, Jiejie
    Jiang, Ping
    Xu, Mengfei
    Zhao, Haiming
    2023 2ND CONFERENCE ON FULLY ACTUATED SYSTEM THEORY AND APPLICATIONS, CFASTA, 2023: 636-641
  • [2] Smooth Exact Gradient Descent Learning in Spiking Neural Networks
    Klos, Christian
    Memmesheimer, Raoul-Martin
    PHYSICAL REVIEW LETTERS, 2025, 134 (02)
  • [3] Meta-learning spiking neural networks with surrogate gradient descent
    Stewart, Kenneth M.
    Neftci, Emre O.
    NEUROMORPHIC COMPUTING AND ENGINEERING, 2022, 2 (04)
  • [4] Differentiable Spike: Rethinking Gradient-Descent for Training Spiking Neural Networks
    Li, Yuhang
    Guo, Yufei
    Zhang, Shanghang
    Deng, Shikuang
    Hai, Yongqing
    Gu, Shi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [5] An approximate gradient descent algorithm for Spiking Neural Network
    Chen, Wenjie
    Li, Chuandong
    2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023: 4690-4694
  • [6] A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks
    Xu, Yan
    Zeng, Xiaoqin
    Han, Lixin
    Yang, Jing
    NEURAL NETWORKS, 2013, 43: 99-113
  • [7] One-Pass Online Learning Based on Gradient Descent for Multilayer Spiking Neural Networks
    Lin, Xianghong
    Hu, Tiandou
    Wang, Xiangwen
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, 15 (01): 16-31
  • [8] INVERSION OF NEURAL NETWORKS BY GRADIENT DESCENT
    KINDERMANN, J
    LINDEN, A
    PARALLEL COMPUTING, 1990, 14 (03): 277-286
  • [9] Trapezoidal Gradient Descent for Effective Reinforcement Learning in Spiking Networks
    Pan, Yuhao
    Wang, Xiucheng
    Cheng, Nan
    Qiu, Qi
    2024 INTERNATIONAL CONFERENCE ON UBIQUITOUS COMMUNICATION, UCOM 2024, 2024: 192-196
  • [10] Sparse Spiking Gradient Descent
    Perez-Nieves, Nicolas
    Goodman, Dan F. M.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021