Macroscopic dynamics of neural networks with heterogeneous spiking thresholds

Cited by: 10
Authors:
Gast, Richard [1 ]
Solla, Sara A. [1 ]
Kennedy, Ann [1 ]
Affiliations:
[1] Northwestern Univ, Feinberg Sch Med, Dept Neurosci, Chicago, IL 60611 USA
Keywords:
MODEL; INHIBITION;
DOI:
10.1103/PhysRevE.107.024306
Chinese Library Classification (CLC):
O35 [Fluid mechanics]; O53 [Plasma physics]
Discipline codes:
070204; 080103; 080704
Abstract:
Mean-field theory links the physiological properties of individual neurons to the emergent dynamics of neural population activity. These models provide an essential tool for studying brain function at different scales; however, to apply them to large-scale neural populations, they must account for differences between distinct neuron types. The Izhikevich single-neuron model can capture a broad range of neuron types and spiking patterns, making it an ideal candidate for a mean-field theoretic treatment of brain dynamics in heterogeneous networks. Here we derive the mean-field equations for networks of all-to-all coupled Izhikevich neurons with heterogeneous spiking thresholds. Using methods from bifurcation theory, we examine the conditions under which the mean-field theory accurately predicts the dynamics of the Izhikevich neuron network. To this end, we focus on three important features of the Izhikevich model that are subject to simplifying assumptions here: (i) spike-frequency adaptation, (ii) the spike reset conditions, and (iii) the distribution of single-cell spike thresholds across neurons. Our results indicate that, while the mean-field model is not an exact description of the Izhikevich network dynamics, it faithfully captures its different dynamic regimes and phase transitions. We thus present a mean-field model that can represent different neuron types and spiking dynamics. The model comprises biophysical state variables and parameters, incorporates realistic spike resetting conditions, and accounts for heterogeneity in neural spiking thresholds. These features allow for broad applicability of the model as well as direct comparison to experimental data.
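For illustration, the short sketch below simulates the kind of microscopic model the abstract describes: an all-to-all coupled population of Izhikevich neurons whose spiking thresholds differ across cells. The dimensional Izhikevich equations, the Cauchy (Lorentzian) sampling of the thresholds, the exponential population synapse, and every parameter value are assumptions made for this sketch only; they are not taken from the paper, which should be consulted for the exact mean-field derivation and parameterization.

```python
import numpy as np

# Minimal sketch (NOT the paper's parameterization): all-to-all network of N
# Izhikevich neurons,
#   C dv/dt = k (v - v_r)(v - v_theta) - u + I_syn + I_ext,
#   du/dt   = a (b (v - v_r) - u),
# with reset v -> v_reset, u -> u + d whenever v >= v_peak. The spike thresholds
# v_theta are heterogeneous across neurons; the Cauchy (Lorentzian) sample is an
# assumption here, as are all numerical values below.

rng = np.random.default_rng(0)
N, dt, T = 1000, 0.01, 1000.0                     # neurons, time step (ms), duration (ms)
C, k, v_r = 100.0, 0.7, -60.0                     # capacitance (pF), gain, resting potential (mV)
a, b, d = 0.03, -2.0, 100.0                       # spike-frequency-adaptation parameters
v_peak, v_reset = 35.0, -55.0                     # spike cutoff and reset (mV)
J, tau_s = 500.0, 6.0                             # coupling strength (pA), synaptic time constant (ms)
I_ext, v_theta_mean, Delta = 300.0, -40.0, 1.0    # external drive (pA), threshold center and spread (mV)

# Heterogeneous spike thresholds, clipped to a plausible range for numerical stability.
v_theta = v_theta_mean + Delta * np.tan(np.pi * (rng.random(N) - 0.5))
v_theta = np.clip(v_theta, -80.0, 0.0)

v = np.full(N, v_r)        # membrane potentials
u = np.zeros(N)            # adaptation variables
s = 0.0                    # population-averaged synaptic activation
rates = []

for _ in range(int(T / dt)):
    I_syn = J * s
    dv = (k * (v - v_r) * (v - v_theta) - u + I_syn + I_ext) / C
    du = a * (b * (v - v_r) - u)
    v += dt * dv
    u += dt * du
    spiked = v >= v_peak
    v[spiked] = v_reset                                    # spike reset
    u[spiked] += d                                         # adaptation jump at spike time
    s += dt * (-s / tau_s) + spiked.sum() / (N * tau_s)    # tau_s ds/dt = -s + population rate
    rates.append(spiked.sum() / (N * dt * 1e-3))           # instantaneous population rate (Hz)

print("mean population rate (Hz):", np.mean(rates))
```

The mean-field model presented in the paper replaces such a network simulation with low-dimensional equations for population-averaged quantities (for example, the firing rate tracked in rates above); the bifurcation analysis described in the abstract compares those macroscopic equations against network simulations of this kind.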
Pages: 14
Related papers (10 of 50 shown):
  • [1] Biologically Inspired Dynamic Thresholds for Spiking Neural Networks
    Ding, Jianchuan
    Dong, Bo
    Heide, Felix
    Ding, Yufei
    Zhou, Yunduo
    Yin, Baocai
    Yang, Xin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [2] Designing the dynamics of spiking neural networks
    Memmesheimer, Raoul-Martin
    Timme, Marc
    PHYSICAL REVIEW LETTERS, 2006, 97 (18)
  • [3] Macroscopic phase-resetting curves for spiking neural networks
    Dumont, Gregory
    Ermentrout, G. Bard
    Gutkin, Boris
    PHYSICAL REVIEW E, 2017, 96 (04)
  • [4] Macroscopic dynamics in separable neural networks
    Chen, YH
    Wang, YH
    Yang, KQ
    PHYSICAL REVIEW E, 2001, 63 (04) : 419011 - 419014
  • [5] Neural activity of heterogeneous inhibitory spiking networks with delay
    Luccioli, Stefano
    Angulo-Garcia, David
    Torcini, Alessandro
    PHYSICAL REVIEW E, 2019, 99 (05)
  • [6] Federated Learning with Spiking Neural Networks in Heterogeneous Systems
    Tumpa, Sadia Anjum
    Singh, Sonali
    Khan, Md Fahim Faysal
    Kandemir, Mahmut Taylan
    Narayanan, Vijaykrishnan
    Das, Chita R.
    2023 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI, ISVLSI, 2023 : 49 - 54
  • [7] Spiking Neural P Systems with Thresholds
    Zeng, Xiangxiang
    Zhang, Xingyi
    Song, Tao
    Pan, Linqiang
    NEURAL COMPUTATION, 2014, 26 (07) : 1340 - 1361
  • [8] STATISTICAL DYNAMICS OF LEARNING PROCESSES IN SPIKING NEURAL NETWORKS
    Hyland, David C.
    JER-NAN JUANG ASTRODYNAMICS SYMPOSIUM, 2013, 147 : 363 - 378
  • [9] Effects of Spike Anticipation on the Spiking Dynamics of Neural Networks
    de Santos-Sierra, Daniel
    Sanchez-Jimenez, Abel
    Garcia-Vellisca, Mariano A.
    Navas, Adrian
    Villacorta-Atienza, Jose A.
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2015, 9
  • [10] Exploring Temporal Information Dynamics in Spiking Neural Networks
    Kim, Youngeun
    Li, Yuhang
    Park, Hyoungseob
    Venkatesha, Yeshwanth
    Hambitzer, Anna
    Panda, Priyadarshini
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023 : 8308 - 8316