Smooth Exact Gradient Descent Learning in Spiking Neural Networks

Times Cited: 0
Authors
Klos, Christian [1 ]
Memmesheimer, Raoul-Martin [1 ]
Affiliations
[1] Univ Bonn, Inst Genet, Neural Network Dynam & Computat, D-53115 Bonn, Germany
Keywords
ERROR-BACKPROPAGATION; NEURONS; SIMULATION; SPARSE; CHAOS; MODEL; FIRE;
DOI
10.1103/PhysRevLett.134.027301
CLC Number
O4 [Physics];
Discipline Code
0702;
Abstract
Gradient descent prevails in artificial neural network training, but seems inept for spiking neural networks, as small parameter changes can cause sudden, disruptive appearances and disappearances of spikes. Here, we demonstrate exact gradient descent based on continuously changing spiking dynamics. These are generated by neuron models whose spikes vanish and appear at the end of a trial, where they cannot influence subsequent dynamics. This also enables gradient-based spike addition and removal. We illustrate our scheme with various tasks and setups, including recurrent and deep, initially silent networks.
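As a hedged, unofficial sketch of the continuity property the abstract describes (not the authors' model or code): for a leaky integrate-and-fire neuron with membrane time constant tau = 1, initial potential V(0) = 0, and constant input current I, the potential is V(t) = I * (1 - exp(-t)), so the threshold crossing V(t*) = theta has the closed form t* = -log(1 - theta/I) for I > theta. As I decreases toward theta, t* diverges smoothly, so the spike leaves the trial through its end rather than vanishing abruptly mid-trial. The threshold theta = 1 and the use of JAX autodiff below are illustrative assumptions.

    # Hedged sketch (not the authors' code): exact spike-time gradient for a
    # leaky integrate-and-fire neuron with constant input I and tau = 1.
    import jax
    import jax.numpy as jnp

    THETA = 1.0  # firing threshold (illustrative assumption)

    def spike_time(i_ext):
        # Solving V(t*) = THETA for V(t) = i_ext * (1 - exp(-t)), V(0) = 0.
        return -jnp.log(1.0 - THETA / i_ext)

    # Exact gradient of the spike time w.r.t. the input current via autodiff;
    # analytically, dt*/dI = -THETA / (I * (I - THETA)).
    dspike_dI = jax.grad(spike_time)

    for i_ext in (2.0, 1.5, 1.1, 1.01):
        print(f"I = {i_ext:5.2f}  t* = {float(spike_time(i_ext)):8.3f}  "
              f"dt*/dI = {float(dspike_dI(i_ext)):10.3f}")

Because t* grows without bound as I approaches theta from above, a loss that depends only on spikes within a finite trial changes continuously as spikes appear or disappear, which is what makes exact gradient descent workable in this setting.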
Pages: 8
Related Papers
50 records in total (entries [41]-[50] shown)
  • [41] An online supervised learning method based on gradient descent for spiking neurons
    Xu, Yan
    Yang, Jing
    Zhong, Shuiming
    NEURAL NETWORKS, 2017, 93 : 7 - 20
  • [42] Normative learning in spiking neural networks
    Jolivet, Renaud B.
    INTERNATIONAL JOURNAL OF PSYCHOLOGY, 2024, 59 : 454 - 455
  • [43] Surrogate Module Learning: Reduce the Gradient Error Accumulation in Training Spiking Neural Networks
    Deng, Shikuang
    Lin, Hao
    Li, Yuhang
    Gu, Shi
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023
  • [44] The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks
    Zenke, Friedemann
    Vogels, Tim P.
    NEURAL COMPUTATION, 2021, 33 (04) : 899 - 925
  • [45] Fractional-order gradient descent learning of BP neural networks with Caputo derivative
    Wang, Jian
    Wen, Yanqing
    Gou, Yida
    Ye, Zhenyun
    Chen, Hua
    NEURAL NETWORKS, 2017, 89 : 19 - 30
  • [46] Optimization of learning process for Fourier series neural networks using gradient descent algorithm
    Halawa, Krzysztof
    PRZEGLAD ELEKTROTECHNICZNY, 2008, 84 (06): 128 - 130
  • [47] Impact of Mathematical Norms on Convergence of Gradient Descent Algorithms for Deep Neural Networks Learning
    Cai, Linzhe
    Yu, Xinghuo
    Li, Chaojie
    Eberhard, Andrew
    Lien Thuy Nguyen
    Chuong Thai Doan
    AI 2022: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13728 : 131 - 144
  • [48] Gradient descent learning algorithm for hierarchical neural networks: A case study in industrial quality
    Baratta, D
    Diotalevi, F
    Valle, M
    Caviglia, DD
    ENGINEERING APPLICATIONS OF BIO-INSPIRED ARTIFICIAL NEURAL NETWORKS, VOL II, 1999, 1607 : 578 - 587
  • [49] Theoretical analysis of batch and on-line training for gradient descent learning in neural networks
    Nakama, Takehiko
    NEUROCOMPUTING, 2009, 73 (1-3) : 151 - 159
  • [50] Using Particle Swarm Optimization with Gradient Descent for Parameter Learning in Convolutional Neural Networks
    Wessels, Steven
    van der Haar, Dustin
    PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2021, 2021, 12702 : 119 - 128