Gradient Descent for Spiking Neural Networks

Cited by: 0
Authors
Huh, Dongsung [1]
Sejnowski, Terrence J. [1]
Affiliations
[1] Salk Inst Biol Studies, La Jolla, CA 92037 USA
Keywords
ERROR-BACKPROPAGATION; RULE;
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most large-scale network models use neurons with static nonlinearities that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking neural networks. Here, we present a gradient descent method for optimizing spiking network models by introducing a differentiable formulation of spiking dynamics and deriving the exact gradient calculation. For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (≈ millisecond) spike-based interactions for efficient encoding of information, and a delayed-memory task over an extended duration (≈ second). The results show that the gradient descent approach indeed optimizes network dynamics on the time scale of individual spikes as well as on behavioral time scales. In conclusion, our method yields a general-purpose supervised learning algorithm for spiking neural networks, which can facilitate further investigations of spike-based computation.
Pages: 11
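The abstract above describes training recurrent spiking networks by making the spiking dynamics differentiable so that exact gradients can be backpropagated through time. The paper's specific gate function is not given in this abstract, so the sketch below is only an illustration of the general idea: a recurrent network whose hard spike threshold is replaced by a smooth sigmoid surrogate, with JAX providing the exact gradient of a readout loss. The network sizes, time constants, loss, and target signal are all illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (assumptions noted above), NOT the paper's exact formulation:
# a leaky recurrent unit whose spike nonlinearity is a smooth sigmoid gate,
# so the loss is differentiable end to end and jax.grad yields exact gradients.
import jax
import jax.numpy as jnp

def soft_spike(v, beta=5.0):
    # Smooth surrogate for the spike-generating nonlinearity (assumed sigmoid).
    return jax.nn.sigmoid(beta * v)

def run_network(params, inputs, dt=1e-3, tau=0.02):
    W_in, W_rec, W_out = params

    def step(v, x_t):
        s = soft_spike(v)                               # graded "spike" output
        dv = (-v + x_t @ W_in.T + s @ W_rec.T) * (dt / tau)
        return v + dv, s @ W_out.T                      # new state, readout

    v0 = jnp.zeros(W_rec.shape[0])
    _, outputs = jax.lax.scan(step, v0, inputs)         # unroll over time
    return outputs

def loss_fn(params, inputs, targets):
    outputs = run_network(params, inputs)
    return jnp.mean((outputs - targets) ** 2)

# One gradient-descent step on all weights via backpropagation through time.
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
n_in, n_rec, n_out, T = 2, 20, 1, 200
params = (0.1 * jax.random.normal(k1, (n_rec, n_in)),
          0.1 * jax.random.normal(k2, (n_rec, n_rec)),
          0.1 * jax.random.normal(k3, (n_out, n_rec)))
inputs = jax.random.normal(jax.random.PRNGKey(1), (T, n_in))
targets = jnp.sin(jnp.linspace(0, 4 * jnp.pi, T))[:, None]   # toy target signal

grads = jax.grad(loss_fn)(params, inputs, targets)
params = tuple(p - 0.1 * g for p, g in zip(params, grads))
```

Because the surrogate gate is smooth, the gradient is exact for this relaxed model; the sharper the gate (larger beta), the closer its behavior is to discrete spiking, at the cost of steeper gradients.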