SpikeSim: An End-to-End Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks

Cited by: 9
Authors
Moitra, Abhishek [1 ]
Bhattacharjee, Abhiroop [1 ]
Kuang, Runcong [2 ]
Krishnan, Gokul [3 ]
Cao, Yu [2 ]
Panda, Priyadarshini [1 ]
Affiliations
[1] Yale Univ, Dept Elect Engn, New Haven, CT 06520 USA
[2] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85287 USA
[3] Meta Reality Labs, Redmond, WA USA
Funding
U.S. National Science Foundation;
Keywords
Analog crossbars; emerging devices; in-memory computing (IMC); spiking neural networks (SNNs);
DOI
10.1109/TCAD.2023.3274918
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Spiking neural networks (SNNs) are an active research domain toward energy-efficient machine intelligence. Compared to conventional artificial neural networks (ANNs), SNNs use temporal spike data and bio-plausible neuronal activation functions, such as leaky-integrate-and-fire/integrate-and-fire (LIF/IF), for data processing. However, SNNs incur a large number of dot-product operations, causing high memory and computation overhead on standard von Neumann computing platforms. To this end, in-memory computing (IMC) architectures have been proposed to alleviate the "memory-wall bottleneck" prevalent in von Neumann architectures. Although recent works have proposed IMC-based SNN hardware accelerators, the following key implementation aspects have been overlooked: 1) the adverse effects of crossbar nonideality on SNN performance due to repeated analog dot-product operations over multiple time-steps and 2) the hardware overheads of essential SNN-specific components, such as the LIF/IF and data communication modules. To address these gaps, we propose SpikeSim, a tool that performs realistic performance, energy, latency, and area evaluation of IMC-mapped SNNs. SpikeSim consists of a practical monolithic IMC architecture, called SpikeFlow, for mapping SNNs. Additionally, the nonideality computation engine (NICE) and the energy-latency-area (ELA) engine perform hardware-realistic evaluation of SpikeFlow-mapped SNNs. Based on a 65-nm CMOS implementation and experiments on the CIFAR10, CIFAR100, and TinyImageNet datasets, we find that the LIF/IF neuronal module contributes significantly to the total hardware area (> 11%). To reduce this overhead, we propose SNN topological modifications that lead to a 1.24x reduction in the neuronal module's area and a 10x reduction in the overall energy-delay product. Furthermore, in this work, we perform a holistic comparison between IMC-implemented ANNs and SNNs and conclude that a lower number of time-steps is key to achieving higher throughput and energy efficiency for SNNs compared to 4-bit ANNs. The code repository for the SpikeSim tool is available at the GitHub link.
Pages: 3815-3828
Page count: 14
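The abstract above highlights two effects that SpikeSim is built to capture: LIF/IF neuronal dynamics evaluated over multiple time-steps, and analog crossbar dot-products whose nonidealities recur at every time-step. The Python snippet below is a minimal illustrative sketch of that interaction, not SpikeSim's actual NICE/ELA implementation; the function names (crossbar_mvm, lif_layer) and the Gaussian conductance-variation model with parameters variation_std, v_th, and leak are assumptions chosen only for demonstration.

import numpy as np

# Minimal sketch of an IMC-mapped SNN layer run over multiple time-steps.
# The LIF dynamics follow the standard leaky-integrate-and-fire update; the
# crossbar nonideality is modeled here as simple Gaussian conductance
# variation, an illustrative assumption rather than SpikeSim's NICE model.

def crossbar_mvm(weights, spikes, variation_std=0.05, rng=None):
    """Analog dot-product on a crossbar: each weight (conductance) is
    perturbed at every read, so the error recurs at every time-step
    instead of being incurred once per inference."""
    rng = np.random.default_rng() if rng is None else rng
    noisy_weights = weights * (1.0 + rng.normal(0.0, variation_std, weights.shape))
    return noisy_weights @ spikes

def lif_layer(weights, spike_train, v_th=1.0, leak=0.9):
    """Run one LIF layer for T time-steps and return the output spikes."""
    T, _ = spike_train.shape
    n_out = weights.shape[0]
    v_mem = np.zeros(n_out)                  # membrane potential
    out_spikes = np.zeros((T, n_out))
    for t in range(T):
        v_mem = leak * v_mem + crossbar_mvm(weights, spike_train[t])
        fired = v_mem >= v_th                # threshold comparison
        out_spikes[t] = fired.astype(float)
        v_mem = np.where(fired, 0.0, v_mem)  # hard reset after a spike
    return out_spikes

# Toy usage: 4 input neurons, 3 output neurons, 8 time-steps of random spike input.
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 0.5, size=(3, 4))
spikes_in = (rng.random((8, 4)) < 0.3).astype(float)
print(lif_layer(W, spikes_in))

Because the weight perturbation is applied at every crossbar read, the membrane potential integrates the nonideality once per time-step, which is the repeated-dot-product effect the abstract identifies as a key concern for IMC-mapped SNNs; SpikeSim's NICE engine evaluates such effects in a hardware-realistic manner rather than with the toy noise model used here.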