Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks

Cited by: 12
Authors:
Sommer, Jan [1]
Ozkan, M. Akif [1]
Keszocze, Oliver [2]
Teich, Juergen [2]
Affiliations:
[1] Friedrich Alexander Univ Erlangen Nurnberg, Chair Hardware Software Codesign, D-91058 Erlangen, Germany
[2] Max Planck Inst Sci Light, D-91058 Erlangen, Germany
Keywords:
Event-based processing; field-programmable gate array (FPGA); hardware acceleration; spiking convolutional neural networks (SNNs)
DOI:
10.1109/TCAD.2022.3197512
Chinese Library Classification (CLC):
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code:
0812
Abstract:
Spiking neural networks (SNNs) compute in an event-based manner to achieve more efficient computation than standard neural networks. In SNNs, neuronal outputs are encoded not as real-valued activations but as sequences of binary spikes. The motivation for using SNNs over conventional neural networks is rooted in the special computational properties of spike-based processing, in particular the high degree of spike sparsity. Well-established implementations of convolutional neural networks (CNNs) feature large spatial arrays of processing elements (PEs) that remain highly underutilized in the face of activation sparsity. We propose a novel architecture optimized for processing convolutional SNNs (CSNNs) with a high degree of sparsity. The proposed architecture consists of an array of PEs matching the size of the convolution kernel and an intelligent spike queue that provides high PE utilization. A constant flow of spikes is ensured by compressing the feature maps into queues that can then be processed spike by spike. This compression is performed at run time, yielding a self-timed schedule and allowing the processing time to scale with the number of spikes. In addition, a novel memory organization scheme is introduced to efficiently store and retrieve the membrane potentials of the individual neurons using multiple small parallel on-chip RAMs. Each RAM is hardwired to its PE, reducing switching circuitry. We implemented the proposed architecture on an FPGA and achieved a significant speedup (~10x) over previously proposed SNN implementations while requiring fewer hardware resources and achieving higher energy efficiency (~15x).
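The event-driven scheme described in the abstract, compressing a binary feature map into a spike queue at run time and triggering one kernel-sized batch of membrane-potential updates per spike, can be illustrated with a minimal software sketch. This is not the authors' FPGA design: the function names, the integrate-and-fire dynamics with reset to zero, and the 'same' padding are assumptions made for the example.

```python
# Minimal sketch of spike-queue-driven convolution for one CSNN layer.
# NOT the authors' FPGA implementation: names, the integrate-and-fire
# model with reset-to-zero, and 'same' padding are illustrative assumptions.
import numpy as np

def compress_to_spike_queue(spike_map):
    """Compress a binary H x W feature map into a queue of (y, x) spike coordinates."""
    ys, xs = np.nonzero(spike_map)
    return list(zip(ys, xs))

def event_driven_conv_if(spike_map, kernel, v_mem, threshold=1.0):
    """Process one binary input map spike by spike.

    kernel : K x K weights of a single output channel
    v_mem  : H x W membrane potentials, updated in place
    Returns the binary output spike map for this time step.
    """
    H, W = spike_map.shape
    K = kernel.shape[0]
    pad = K // 2
    out_spikes = np.zeros((H, W), dtype=np.uint8)

    # Work scales with the number of spikes, not with H * W.
    for (y, x) in compress_to_spike_queue(spike_map):
        # Each (ky, kx) kernel tap corresponds to one PE of a K x K array;
        # in hardware these K * K updates would run in parallel.
        for ky in range(K):
            for kx in range(K):
                oy, ox = y + pad - ky, x + pad - kx
                if 0 <= oy < H and 0 <= ox < W:
                    v_mem[oy, ox] += kernel[ky, kx]

    # Threshold and reset the neurons that fired.
    fired = v_mem >= threshold
    out_spikes[fired] = 1
    v_mem[fired] = 0.0
    return out_spikes

# Example: a 32 x 32 map with ~5% spike sparsity and a 3 x 3 kernel.
rng = np.random.default_rng(0)
spikes = (rng.random((32, 32)) < 0.05).astype(np.uint8)
v = np.zeros((32, 32))
out = event_driven_conv_if(spikes, np.full((3, 3), 0.4), v)
```

In this sketch the array v_mem stands in for the membrane-potential storage that the abstract distributes over multiple small parallel on-chip RAMs, one hardwired to each PE.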
Pages: 3767-3778
Page count: 12