Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks

Cited by: 12
Authors
Sommer, Jan [1 ]
Ozkan, M. Akif [1 ]
Keszocze, Oliver [2 ]
Teich, Juergen [2 ]
Affiliations
[1] Friedrich Alexander Univ Erlangen Nurnberg, Chair Hardware Software Codesign, D-91058 Erlangen, Germany
[2] Max Planck Inst Sci Light, D-91058 Erlangen, Germany
Keywords
Event-based processing; field-programmable gate array (FPGA); hardware acceleration; spiking convolutional neural networks (SNNs)
DOI
10.1109/TCAD.2022.3197512
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Spiking neural networks (SNNs) compute in an event-based manner to achieve more efficient computation than standard neural networks. In SNNs, neuronal outputs are not encoded as real-valued activations but as sequences of binary spikes. The motivation for using SNNs over conventional neural networks is rooted in the special computational aspects of spike-based processing, especially the high degree of sparsity of spikes. Well-established implementations of convolutional neural networks (CNNs) feature large spatial arrays of processing elements (PEs) that remain highly underutilized in the face of activation sparsity. We propose a novel architecture optimized for the processing of convolutional SNNs (CSNNs) featuring a high degree of sparsity. The proposed architecture consists of an array of PEs of the size of the convolution kernel and an intelligent spike queue that provides high PE utilization. A constant flow of spikes is ensured by compressing the feature maps into queues that can then be processed spike by spike. This compression is performed at run time, leading to a self-timed schedule and allowing the processing time to scale with the number of spikes. Also, a novel memory organization scheme is introduced to efficiently store and retrieve the membrane potentials of the individual neurons using multiple small parallel on-chip RAMs. Each RAM is hardwired to its PE, reducing switching circuitry. We implemented the proposed architecture on an FPGA and achieved a significant speedup (~10x) over previously proposed SNN implementations while requiring fewer hardware resources and maintaining higher energy efficiency (~15x).
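The event-driven processing the abstract describes can be illustrated with a minimal software sketch (all names, shapes, and the fire-and-reset neuron model here are illustrative assumptions, not the paper's actual RTL design): input spikes are drained from a queue one by one, each spike triggers a kernel-sized burst of membrane-potential updates, and total work therefore scales with the number of spikes rather than the feature-map size.

```python
# Sketch of event-driven sparse convolution for a CSNN layer.
# Hypothetical shapes and a simple integrate-and-fire model are assumed;
# the paper's hardware keeps the membrane potentials in many small
# parallel on-chip RAMs, one hardwired to each PE.
import numpy as np

def event_driven_conv(spikes, weights, fmap_shape, threshold=1.0):
    """Process a queue of input spikes spike by spike.

    spikes     : list of (y, x, c_in) coordinates of binary input spikes
    weights    : array of shape (K, K, C_in, C_out), "same" padding
    fmap_shape : (H, W) spatial size of the output feature map
    Returns the emitted output spikes and the membrane-potential state.
    """
    K, _, _, c_out = weights.shape
    H, W = fmap_shape
    membrane = np.zeros((H, W, c_out))    # one entry per output neuron
    for (y, x, c) in spikes:              # self-timed: one spike per step
        for ky in range(K):               # a KxK PE array updates all
            for kx in range(K):           # neighbors of the spike at once
                oy, ox = y - ky + K // 2, x - kx + K // 2
                if 0 <= oy < H and 0 <= ox < W:
                    membrane[oy, ox, :] += weights[ky, kx, c, :]
    # Fire-and-reset after integration (integrate-and-fire style).
    out_spikes = []
    for oy, ox, oc in zip(*np.nonzero(membrane >= threshold)):
        out_spikes.append((oy, ox, oc))
        membrane[oy, ox, oc] = 0.0
    return out_spikes, membrane
```

Because the loop runs only over queued spikes, an empty or sparse input costs almost nothing, which is the property the paper's run-time feature-map compression exploits to keep its PEs utilized.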
Pages: 3767 - 3778
Page count: 12
Related Papers
50 records
  • [21] Deterministic conversion rule for CNNs to efficient spiking convolutional neural networks
    Yang, Xu
    Zhang, Zhongxing
    Zhu, Wenping
    Yu, Shuangming
    Liu, Liyuan
    Wu, Nanjian
    SCIENCE CHINA-INFORMATION SCIENCES, 2020, 63 (02) : 196 - 214
  • [22] Efficient Hardware Acceleration of Spiking Neural Networks using FPGA: Towards Real-Time Edge Neuromorphic Computing
    El Maachi, Soukaina
    Chehri, Abdellah
    Saadane, Rachid
    2024 IEEE 99TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2024-SPRING, 2024,
  • [24] Accurate and Efficient Stochastic Computing Hardware for Convolutional Neural Networks
    Yu, Joonsang
    Kim, Kyounghoon
    Lee, Jongeun
    Choi, Kiyoung
    2017 IEEE 35TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD), 2017, : 105 - 112
  • [25] Sparsity Enables Data and Energy Efficient Spiking Convolutional Neural Networks
    Bhatt, Varun
    Ganguly, Udayan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT I, 2018, 11139 : 263 - 272
  • [27] An Efficient Hardware Accelerator for Sparse Convolutional Neural Networks on FPGAs
    Lu, Liqiang
    Xie, Jiaming
    Huang, Ruirui
    Zhang, Jiansong
    Lin, Wei
    Liang, Yun
    2019 27TH IEEE ANNUAL INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE CUSTOM COMPUTING MACHINES (FCCM), 2019, : 17 - 25
  • [28] DSP-Efficient Hardware Acceleration of Convolutional Neural Network Inference on FPGAs
    Wang, Dong
    Xu, Ke
    Guo, Jingning
    Ghiasi, Soheil
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2020, 39 (12) : 4867 - 4880
  • [29] A Comprehensive Review of Hardware Acceleration Techniques and Convolutional Neural Networks for EEG Signals
    Xie, Yu
    Oniga, Stefan
    SENSORS, 2024, 24 (17)
  • [30] Towards Hardware-Software Self-Adaptive Acceleration of Spiking Neural Networks on Reconfigurable Digital Hardware
    Pachideh, Brian
    Zielke, Christian
    Nitzsche, Sven
    Becker, Juergen
    2023 IEEE 36TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE, SOCC, 2023, : 184 - 189