Gradient Descent for Spiking Neural Networks

Cited by: 0
Authors
Huh, Dongsung [1 ]
Sejnowski, Terrence J. [1 ]
Affiliations
[1] Salk Inst Biol Studies, La Jolla, CA 92037 USA
Keywords
ERROR-BACKPROPAGATION; RULE
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Most large-scale network models use neurons with static nonlinearities that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking neural networks. Here, we present a gradient descent method for optimizing spiking network models by introducing a differentiable formulation of spiking dynamics and deriving the exact gradient calculation. For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (~millisecond) spike-based interactions for efficient encoding of information, and a delayed-memory task over an extended duration (~second). The results show that the gradient descent approach indeed optimizes network dynamics on the time scale of individual spikes as well as on behavioral time scales. In conclusion, our method yields a general-purpose supervised learning algorithm for spiking neural networks, which can facilitate further investigation of spike-based computation.
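At a high level, the method removes the non-differentiability of spike generation so that backpropagation through the simulated dynamics yields exact gradients. The sketch below is a loose illustration only, not the paper's actual formulation (which derives the exact gradient for a soft-threshold integrate-and-fire model): it replaces the hard spike threshold with a smooth sigmoid gate so PyTorch autograd can differentiate the whole simulation. The network size, time constants, gate sharpness, and task are all assumptions made for this example.

```python
# Minimal sketch: a recurrent network whose "spikes" come from a smooth
# gate g(v), making the entire simulation differentiable end to end.
import torch

torch.manual_seed(0)
N, T, dt = 50, 200, 1e-3           # neurons, time steps, step size (s)
tau_v, tau_s = 20e-3, 10e-3        # membrane / synaptic time constants (s)

# Trainable recurrent weights and linear readout (illustrative scales).
W = (0.1 * torch.randn(N, N)).requires_grad_(True)
w_out = (0.1 * torch.randn(N)).requires_grad_(True)

def gate(v, beta=5.0):
    # Smooth stand-in for the hard spike threshold at v = 1.
    return torch.sigmoid(beta * (v - 1.0))

def run(x):
    v = torch.zeros(N)             # membrane potentials
    s = torch.zeros(N)             # filtered synaptic currents
    out = []
    for t in range(T):
        r = gate(v)                                # graded "spike" output
        s = s + dt * (r - s) / tau_s               # synaptic filtering
        v = v + dt * (-v / tau_v + W @ s + x[t])   # leaky integration
        out.append(w_out @ s)
    return torch.stack(out)

x = 0.5 * torch.randn(T, N)                        # fixed input drive
target = torch.sin(torch.linspace(0.0, 6.28, T))   # slow target signal
opt = torch.optim.Adam([W, w_out], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    loss = ((run(x) - target) ** 2).mean()
    loss.backward()    # exact gradient of the simulated dynamics via BPTT
    opt.step()
```

Sharpening the gate (larger beta) pushes the dynamics toward all-or-none spiking; managing the trade-off between that sharpness and gradient quality is precisely what a differentiable formulation of spiking dynamics has to address.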
Pages: 11
Related Papers
50 records in total
• [21] A gradient descent learning algorithm for fuzzy neural networks. Feuring, T; Buckley, JJ; Hayashi, Y. 1998 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS AT THE IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE - PROCEEDINGS, VOL 1-2, 1998: 1136-1141.
• [22] Generalization Guarantees of Gradient Descent for Shallow Neural Networks. Wang, Puyu; Lei, Yunwen; Wang, Di; Ying, Yiming; Zhou, Ding-Xuan. NEURAL COMPUTATION, 2025, 37(02): 344-402.
• [23] Convergence of gradient descent for learning linear neural networks. Nguegnang, Gabin Maxime; Rauhut, Holger; Terstiege, Ulrich. ADVANCES IN CONTINUOUS AND DISCRETE MODELS, 2024, 2024(01).
• [24] Optimization of Graph Neural Networks with Natural Gradient Descent. Izadi, Mohammad Rasool; Fang, Yihao; Stevenson, Robert; Lin, Lizhen. 2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020: 171-179.
• [25] Understanding the Convolutional Neural Networks with Gradient Descent and Backpropagation. Zhou, XueFei. 2ND INTERNATIONAL CONFERENCE ON MACHINE VISION AND INFORMATION TECHNOLOGY (CMVIT 2018), 2018, 1004.
• [26] Neural Networks can Learn Representations with Gradient Descent. Damian, Alex; Lee, Jason D.; Soltanolkotabi, Mahdi. CONFERENCE ON LEARNING THEORY, VOL 178, 2022.
• [27] Fractional-order spike-timing-dependent gradient descent for multi-layer spiking neural networks. Yang, Yi; Voyles, Richard M.; Zhang, Haiyan H.; Nawrocki, Robert A. NEUROCOMPUTING, 2025, 611.
• [28] Training Spiking ConvNets by STDP and Gradient Descent. Tavanaei, Amirhossein; Kirby, Zachary; Maida, Anthony S. 2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018.
• [29] Natural gradient enables fast sampling in spiking neural networks. Masset, Paul; Zavatone-Veth, Jacob A.; Connor, J. Patrick; Murthy, Venkatesh N.; Pehlevan, Cengiz. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022.
• [30] GRADUAL SURROGATE GRADIENT LEARNING IN DEEP SPIKING NEURAL NETWORKS. Chen, Yi; Zhang, Silin; Ren, Shiyu; Qu, Hong. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 8927-8931.