Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation

Cited by: 65
Authors
Meng, Qingyan [1 ,2 ]
Xiao, Mingqing [3 ]
Yan, Shen [4 ]
Wang, Yisen [3 ,5 ]
Lin, Zhouchen [3 ,5 ,6 ]
Luo, Zhi-Quan [1 ,2 ]
Affiliations
[1] Chinese Univ Hong Kong, Shenzhen, Peoples R China
[2] Shenzhen Res Inst Big Data, Shenzhen, Peoples R China
[3] Peking Univ, Sch Artificial Intelligence, Key Lab Machine Percept MoE, Beijing, Peoples R China
[4] Peking Univ, Ctr Data Sci, Beijing, Peoples R China
[5] Peking Univ, Inst Artificial Intelligence, Beijing, Peoples R China
[6] Peng Cheng Lab, Shenzhen, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR52688.2022.01212
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware. However, training SNNs efficiently is challenging due to their non-differentiability. Most existing methods either suffer from high latency (i.e., long simulation time steps) or cannot match the performance of Artificial Neural Networks (ANNs). In this paper, we propose the Differentiation on Spike Representation (DSR) method, which achieves performance competitive with ANNs at low latency. First, we encode spike trains into a spike representation using (weighted) firing rate coding. Based on this representation, we systematically derive that the spiking dynamics of common neuron models can be represented as a sub-differentiable mapping. With this viewpoint, the proposed DSR method trains SNNs through gradients of the mapping and avoids the common non-differentiability problem in SNN training. We then analyze the error incurred when representing the specific mapping with the forward computation of the SNN. To reduce this error, we propose training the spike threshold in each layer and introducing a new hyperparameter for the neuron models. With these components, the DSR method achieves state-of-the-art SNN performance with low latency on both static and neuromorphic datasets, including CIFAR-10, CIFAR-100, ImageNet, and DVS-CIFAR10.
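The abstract's core idea, that the (scaled) firing rate of a spiking neuron approximates a sub-differentiable clamp-like mapping of its input, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and the constant-input, soft-reset integrate-and-fire setup are simplifying assumptions for illustration only.

```python
import numpy as np

def if_neuron_rate(I, T=32, v_th=1.0):
    """Simulate an integrate-and-fire neuron for T time steps with a
    constant input current I (soft reset), and return the scaled firing
    rate -- the 'spike representation' of the output spike train."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += I                # integrate the input current
        if v >= v_th:         # non-differentiable spike condition
            spikes += 1
            v -= v_th         # soft reset: subtract the threshold
    return spikes / T * v_th

def clamp_mapping(I, v_th=1.0):
    """Sub-differentiable mapping that the scaled firing rate
    approximates; gradients can flow through this surrogate forward
    map instead of the spike function."""
    return float(np.clip(I, 0.0, v_th))

# The scaled firing rate tracks clamp(I, 0, v_th) up to O(1/T) error:
for I in [0.3, 0.7, 1.2]:
    print(f"I={I}: rate={if_neuron_rate(I):.4f}, clamp={clamp_mapping(I):.4f}")
```

The representation error shrinks as the number of time steps T grows, which is one reason the paper additionally trains per-layer thresholds to keep the error small at low latency.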
Pages: 12434-12443 (10 pages)
Related Papers
50 records
  • [11] Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks
    Chowdhury, Sayeed Shafayet
    Garg, Isha
    Roy, Kaushik
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [12] Toward High-Accuracy and Low-Latency Spiking Neural Networks With Two-Stage Optimization
    Wang, Ziming
    Zhang, Yuhao
    Lian, Shuang
    Cui, Xiaoxin
    Yan, Rui
    Tang, Huajin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, : 1 - 15
  • [13] Energy efficient and low-latency spiking neural networks on embedded microcontrollers through spiking activity tuning
    Barchi, Francesco
    Parisi, Emanuele
    Zanatta, Luca
    Bartolini, Andrea
    Acquaviva, Andrea
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (30) : 18897 - 18917
  • [14] Future Low-latency Networks for High Performance Computing
    Koibuchi, Michihiro
    2013 FIRST INTERNATIONAL SYMPOSIUM ON COMPUTING AND NETWORKING (CANDAR), 2013, : 22 - 23
  • [15] IDSNN: Towards High-Performance and Low-Latency SNN Training via Initialization and Distillation
    Fan, Xiongfei
    Zhang, Hong
    Zhang, Yu
    BIOMIMETICS, 2023, 8 (04)
  • [16] Direct training high-performance spiking neural networks for object recognition and detection
    Zhang, Hong
    Li, Yang
    He, Bin
    Fan, Xiongfei
    Wang, Yue
    Zhang, Yu
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [17] Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks
    Wu, Yujie
    Deng, Lei
    Li, Guoqi
    Zhu, Jun
    Shi, Luping
    FRONTIERS IN NEUROSCIENCE, 2018, 12
  • [18] Gated Attention Coding for Training High-Performance and Efficient Spiking Neural Networks
    Qiu, Xuerui
    Zhu, Rui-Jie
    Chou, Yuhong
    Wang, Zhaorui
    Deng, Liang-Jian
    Li, Guoqi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 1, 2024, : 601 - 610
  • [19] Shrinking Your TimeStep: Towards Low-Latency Neuromorphic Object Recognition with Spiking Neural Networks
    Ding, Yongqi
    Zuo, Lin
    Jing, Mengmeng
    He, Pei
    Xiao, Yongjun
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 10, 2024, : 11811 - 11819
  • [20] An Improved STBP for Training High-Accuracy and Low-Spike-Count Spiking Neural Networks
    Tan, Pai-Yu
    Wu, Cheng-Wen
    Lu, Juin-Ming
    PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021, : 575 - 580