Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation

Cited by: 65
Authors
Meng, Qingyan [1 ,2 ]
Xiao, Mingqing [3 ]
Yan, Shen [4 ]
Wang, Yisen [3 ,5 ]
Lin, Zhouchen [3 ,5 ,6 ]
Luo, Zhi-Quan [1 ,2 ]
Affiliations
[1] Chinese Univ Hong Kong, Shenzhen, Peoples R China
[2] Shenzhen Res Inst Big Data, Shenzhen, Peoples R China
[3] Peking Univ, Sch Artificial Intelligence, Key Lab Machine Percept MoE, Beijing, Peoples R China
[4] Peking Univ, Ctr Data Sci, Beijing, Peoples R China
[5] Peking Univ, Inst Artificial Intelligence, Beijing, Peoples R China
[6] Peng Cheng Lab, Shenzhen, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR52688.2022.01212
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware. However, efficiently training SNNs is challenging due to their non-differentiability. Most existing methods either suffer from high latency (i.e., long simulation time steps) or cannot achieve performance as high as that of Artificial Neural Networks (ANNs). In this paper, we propose the Differentiation on Spike Representation (DSR) method, which achieves performance competitive with ANNs at low latency. First, we encode the spike trains into a spike representation using (weighted) firing rate coding. Based on the spike representation, we systematically derive that the spiking dynamics of common neural models can be represented as a sub-differentiable mapping. With this viewpoint, our proposed DSR method trains SNNs through gradients of the mapping and avoids the common non-differentiability problem in SNN training. Then we analyze the error incurred when representing the specific mapping with the forward computation of the SNN. To reduce such error, we propose training the spike threshold in each layer and introducing a new hyperparameter for the neural models. With these components, the DSR method achieves state-of-the-art SNN performance with low latency on both static and neuromorphic datasets, including CIFAR-10, CIFAR-100, ImageNet, and DVS-CIFAR10.
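The core idea the abstract describes can be illustrated with a minimal sketch: under firing rate coding, the (scaled) firing rate of an integrate-and-fire neuron with soft reset approximates a sub-differentiable clipped-linear mapping of its average input, which is the quantity DSR differentiates through. This is only an illustrative toy under simplifying assumptions (constant input current, IF neuron, reset by subtraction), not the paper's implementation; the function names `if_forward` and `surrogate` are hypothetical.

```python
def if_forward(x, T=20, v_th=1.0):
    """Simulate an integrate-and-fire neuron with soft reset for T
    time steps under a constant input current x, and return the
    scaled firing rate (the spike representation)."""
    v = 0.0        # membrane potential
    spikes = 0     # spike count over the simulation window
    for _ in range(T):
        v += x                 # integrate the input current
        if v >= v_th:          # fire when the threshold is reached
            spikes += 1
            v -= v_th          # soft reset: subtract the threshold
    return v_th * spikes / T   # weighted firing rate in [0, v_th]

def surrogate(x, v_th=1.0):
    """Sub-differentiable mapping that the firing rate approximates:
    a clipped linear (ReLU-like) function. Gradients for training
    would flow through this mapping instead of the spike train."""
    return max(0.0, min(x, v_th))
```

As the number of time steps T grows, `if_forward(x, T)` converges to `surrogate(x)`; training through the smooth mapping sidesteps the non-differentiable spike generation, which is the viewpoint the DSR method builds on.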
Pages: 12434-12443 (10 pages)
Related Papers (50 records in total)
  • [21] Direct training high-performance deep spiking neural networks: a review of theories and methods
    Zhou, Chenlin
    Zhang, Han
    Yu, Liutao
    Ye, Yumin
    Zhou, Zhaokun
    Huang, Liwei
    Ma, Zhengyu
    Fan, Xiaopeng
    Zhou, Huihui
    Tian, Yonghong
    FRONTIERS IN NEUROSCIENCE, 2024, 18
  • [22] DPSNN: spiking neural network for low-latency streaming speech enhancement
    Sun, Tao
    Bohte, Sander
    NEUROMORPHIC COMPUTING AND ENGINEERING, 2024, 4 (04):
  • [23] High-performance deep spiking neural networks via at-most-two-spike exponential coding
    Chen, Yunhua
    Feng, Ren
    Xiong, Zhimin
    Xiao, Jinsheng
    Liu, Jian K.
    NEURAL NETWORKS, 2024, 176
  • [24] Low-Latency Spiking Neural Networks Using Pre-Charged Membrane Potential and Delayed Evaluation
    Hwang, Sungmin
    Chang, Jeesoo
    Oh, Min-Hye
    Min, Kyung Kyu
    Jang, Taejin
    Park, Kyungchul
    Yu, Junsu
    Lee, Jong-Ho
    Park, Byung-Gook
    FRONTIERS IN NEUROSCIENCE, 2021, 15
  • [25] Spikeformer: Training high-performance spiking neural network with transformer
    Li, Yudong
    Lei, Yunlin
    Yang, Xu
    NEUROCOMPUTING, 2024, 574
  • [26] Hierarchical system synchronization and signaling for high-performance, low-latency interconnects
    Mueller, Peter
    Bapst, Urs
    Luijten, Ronald
    2005 IEEE INTERNATIONAL CONFERENCE ON ELECTRO/INFORMATION TECHNOLOGY (EIT 2005), 2005, : 145 - 150
  • [27] A Low-Latency and High-Performance SCL Decoder with Frame-Interleaving
    Zhang, Leyu
    Ren, Yuqing
    Shen, Yifei
    Zhou, Wuyang
    Balatsoukas-Stimming, Alexios
    Zhang, Chuan
    Burg, Andreas
    2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,
  • [28] Amortized Neural Networks for Low-Latency Speech Recognition
    Macoskey, Jonathan
    Strimel, Grant P.
    Su, Jinru
    Rastrow, Ariya
    INTERSPEECH 2021, 2021, : 4558 - 4562
  • [29] Low Latency Spiking ConvNets with Restricted Output Training and False Spike Inhibition
    Chen, Ruizhi
    Ma, Hong
    Guo, Peng
    Xie, Shaolin
    Li, Pin
    Wang, Donglin
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018, : 404 - 411
  • [30] Benchmarking Artificial Neural Network Architectures for High-Performance Spiking Neural Networks
    Islam, Riadul
    Majurski, Patrick
    Kwon, Jun
    Sharma, Anurag
    Tummala, Sri Ranga Sai Krishna
    SENSORS, 2024, 24 (04)