STATISTICAL DYNAMICS OF LEARNING PROCESSES IN SPIKING NEURAL NETWORKS

Cited: 0
Authors
Hyland, David C. [1 ]
Institution
[1] Texas A&M Univ, College Stn, TX 77843 USA
Source
Keywords
DOI
Not available
CLC classification
V [Aeronautics, Astronautics];
Subject classification code
08 ; 0825 ;
Abstract
In previous work, the author and Dr. Jer-Nan Juang contributed a new neural net architecture within the framework of "second-generation" neural models. We showed how to implement backpropagation learning in a massively parallel architecture involving only local computations, thereby capturing one of the principal advantages of biological neural nets. Since then, a large body of neurobiological research has given rise to the "third-generation" models, namely spiking neural nets, in which the brief, sharp pulses (spikes) produced by neurons are explicitly modeled. Information is encoded not in average firing rates but in the temporal pattern of the spikes. Further, no physiological basis for backpropagation has been found; rather, synaptic plasticity is driven by the timing of spikes. The present paper examines the statistical dynamics of learning processes in spiking neural nets. Equations describing the evolution of the synaptic efficacies and the probability distributions of the neural states are derived. Although the system is strongly nonlinear, the typically large number of synapses per neuron (on the order of 10,000) permits us to obtain a closed system of equations. As in the earlier work, we see that the learning process in this more realistic setting is dominated by local interactions, thereby preserving massive parallelism. It is hoped that the formulation given here will provide the basis for rigorous analysis of learning dynamics in very large neural nets (roughly 10^10 neurons in the human brain), for which direct simulation is difficult or impractical.
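The record reproduces only the abstract, not the paper's equations. As a purely illustrative sketch of spike-timing-driven plasticity of the general kind the abstract describes, the following Python snippet implements a standard pair-based STDP rule with exponential learning windows; the parameter names and values (A_PLUS, A_MINUS, TAU_PLUS, TAU_MINUS) are hypothetical and are not taken from the paper.

    import numpy as np

    # Hypothetical learning-window parameters (not from the paper):
    # A_PLUS / A_MINUS are the potentiation / depression amplitudes,
    # TAU_PLUS / TAU_MINUS the exponential window time constants in ms.
    A_PLUS, A_MINUS = 0.01, 0.012
    TAU_PLUS, TAU_MINUS = 20.0, 20.0

    def stdp_delta_w(t_pre, t_post):
        """Weight change for one pre/post spike pair.

        A presynaptic spike preceding the postsynaptic spike (t_post > t_pre)
        potentiates the synapse; the reverse ordering depresses it, with a
        magnitude that decays exponentially in the spike-time difference.
        """
        dt = t_post - t_pre
        if dt >= 0:
            return A_PLUS * np.exp(-dt / TAU_PLUS)
        return -A_MINUS * np.exp(dt / TAU_MINUS)

    # Accumulate the rule over all spike pairs seen by a single synapse.
    pre_spikes = np.array([10.0, 55.0, 90.0])    # example spike times (ms)
    post_spikes = np.array([12.0, 50.0, 95.0])
    dw = sum(stdp_delta_w(tp, tq) for tp in pre_spikes for tq in post_spikes)
    print(f"net change in synaptic efficacy: {dw:+.4f}")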
Pages: 363-378
Page count: 16