Gradient Descent for Spiking Neural Networks

Cited by: 0
Authors
Huh, Dongsung [1 ]
Sejnowski, Terrence J. [1 ]
Affiliations
[1] Salk Inst Biol Studies, La Jolla, CA 92037 USA
Keywords
ERROR-BACKPROPAGATION; RULE
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Most large-scale network models use neurons with static nonlinearities that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking neural networks. Here, we present a gradient descent method for optimizing spiking network models by introducing a differentiable formulation of spiking dynamics and deriving the exact gradient calculation. For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (≈ millisecond) spike-based interactions for efficient encoding of information, and a delayed-memory task over an extended duration (≈ second). The results show that the gradient descent approach indeed optimizes network dynamics on the time scale of individual spikes as well as on behavioral time scales. In conclusion, our method yields a general-purpose supervised learning algorithm for spiking neural networks, which can facilitate further investigations of spike-based computation.
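The central idea of the abstract, replacing the non-differentiable spike threshold with a smooth formulation so that exact gradients can be back-propagated through the spiking dynamics, can be sketched in a few lines. The sketch below is an illustrative assumption, not the authors' actual formulation or code: it uses PyTorch autograd, a sigmoid gate in place of the hard threshold, and a toy sine-wave target; all names (gate, W_rec, tau_v, etc.) and constants are invented for illustration.

import torch

torch.manual_seed(0)
N, T, dt = 50, 200, 1e-3             # neurons, time steps, step size (s); illustrative values
tau_v, tau_s = 20e-3, 10e-3          # membrane / synaptic time constants (s)

W_in  = 50.0 + 25.0 * torch.randn(N, 1)          # fixed input weights (illustrative scale)
W_rec = torch.randn(N, N) * (1.5 / N ** 0.5)
W_rec.requires_grad_(True)                        # trained by gradient descent
W_out = torch.zeros(1, N, requires_grad=True)     # trained readout

def gate(v, beta=20.0):
    # Smooth stand-in for the hard threshold at v = 1, differentiable everywhere,
    # so autograd can propagate gradients through every spike.
    return torch.sigmoid(beta * (v - 1.0))

def run(inp):
    v = torch.zeros(N)      # membrane potentials
    s = torch.zeros(N)      # filtered synaptic traces
    out = []
    for t in range(T):
        spike = gate(v)                                                   # graded "spike" variable
        v = v + dt * (-v / tau_v + W_rec @ s + (W_in @ inp[t])) - spike   # leak + input, soft reset
        s = s + dt * (-s / tau_s) + spike                                 # synaptic filtering of spikes
        out.append(W_out @ s)
    return torch.stack(out).squeeze()

# Toy task: map a constant input to a sine wave, trained by plain gradient descent.
inp = torch.ones(T, 1)
target = torch.sin(torch.linspace(0.0, 6.283, T))

opt = torch.optim.SGD([W_rec, W_out], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = torch.mean((run(inp) - target) ** 2)
    loss.backward()                                   # exact backpropagation through the smooth dynamics
    opt.step()

Because the gate is smooth, the loss is differentiable in the recurrent and readout weights, so ordinary gradient descent optimizes the network on the time scale of individual time steps as well as over the whole trial, which is the property the abstract emphasizes.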
Pages: 11
Related Papers
50 records in total
  • [31] Differentiable hierarchical and surrogate gradient search for spiking neural networks
    Che, Kaiwei
    Leng, Luziwei
    Zhang, Kaixuan
    Zhang, Jianguo
    Meng, Max Q. -H.
    Cheng, Jie
    Guo, Qinghai
    Liao, Jiangxing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [32] Pruning of Deep Spiking Neural Networks through Gradient Rewiring
    Chen, Yanqi
    Yu, Zhaofei
    Fang, Wei
    Huang, Tiejun
    Tian, Yonghong
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 1713 - 1721
  • [33] Surrogate gradient scaling for directly training spiking neural networks
    Chen, Tao
    Wang, Shu
    Gong, Yu
    Wang, Lidan
    Duan, Shukai
    APPLIED INTELLIGENCE, 2023, 53 (23) : 27966 - 27981
  • [34] Gradient learning in spiking neural networks by dynamic perturbation of conductances
    Fiete, Ila R.
    Seung, H. Sebastian
    PHYSICAL REVIEW LETTERS, 2006, 97 (04)
  • [36] Learnable Surrogate Gradient for Direct Training Spiking Neural Networks
    Lian, Shuang
    Shen, Jiangrong
    Liu, Qianhui
    Wang, Ziming
    Yan, Rui
    Tang, Huajin
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 3002 - 3010
  • [37] Convergence of Gradient Descent Algorithm for Diagonal Recurrent Neural Networks
    Xu, Dongpo
    Li, Zhengxue
    Wu, Wei
    Ding, Xiaoshuai
    Qu, Di
    2007 SECOND INTERNATIONAL CONFERENCE ON BIO-INSPIRED COMPUTING: THEORIES AND APPLICATIONS, 2007, : 29 - 31
  • [39] Convergence rates for shallow neural networks learned by gradient descent
    Braun, Alina
    Kohler, Michael
    Langer, Sophie
    Walk, Harro
    BERNOULLI, 2024, 30 (01) : 475 - 502
  • [40] Time delay learning by gradient descent in Recurrent Neural Networks
    Boné, R
    Cardot, H
    ARTIFICIAL NEURAL NETWORKS: FORMAL MODELS AND THEIR APPLICATIONS - ICANN 2005, PT 2, PROCEEDINGS, 2005, 3697 : 175 - 180