Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware

Cited by: 0
Authors
Diehl, Peter U. [1 ,2 ]
Zarrella, Guido [3 ]
Cassidy, Andrew [4 ]
Pedroni, Bruno U. [5 ]
Neftci, Emre [5 ,6 ]
Affiliations
[1] Swiss Fed Inst Technol, Inst Neuroinformat, Zurich, Switzerland
[2] Univ Zurich, CH-8006 Zurich, Switzerland
[3] Mitre Corp, Burlington Rd, Bedford, MA 01730 USA
[4] IBM Res Almaden, San Jose, CA USA
[5] Univ Calif San Diego, Inst Neural Computat, La Jolla, CA USA
[6] UC Irvine, Dept Cognit Sci, Irvine, CA USA
Keywords
DEEP;
DOI
Not available
CLC number
TP301 [Theory, Methods];
Subject classification code
081202;
Abstract
In recent years the field of neuromorphic low-power systems has gained significant momentum, spurring brain-inspired hardware systems which operate on principles fundamentally different from standard digital computers and thereby consume orders of magnitude less power. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, the efficient processing of temporal sequences or variable-length inputs remains difficult, partly due to challenges in representing and configuring the dynamics of spiking neural networks. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a "train-and-constrain" methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while being compatible with the capabilities of current and near-future neuromorphic systems. The method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), mapping the recurrent layer of the network onto IBM's Neurosynaptic System TrueNorth, a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights to 16 levels, discretize the neural activities to 16 levels, and to limit the fan-in to 64 inputs. Surprisingly, we find that short synaptic delays are sufficient to implement the dynamic (temporal) aspect of the RNN in the question classification task.
Furthermore, we observed that the discretization of the neural activities is beneficial to our train-and-constrain approach. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of approximately 17 µW.
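The discretization the abstract describes (weights and activities each quantized to 16 levels) can be illustrated with a minimal sketch. This is not the authors' code; the function names and the uniform quantization scheme are assumptions chosen for illustration:

```python
def quantize_weights(weights, levels=16):
    """Snap real-valued weights to `levels` uniformly spaced values
    spanning [-w_max, +w_max], where w_max is the largest magnitude."""
    w_max = max(abs(w) for w in weights)
    if w_max == 0:
        return list(weights)
    step = 2 * w_max / (levels - 1)  # spacing between adjacent levels
    # integer code in 0..levels-1, then mapped back to a weight value
    return [round((w + w_max) / step) * step - w_max for w in weights]

def quantize_activity(a, levels=16):
    """Discretize an activation in [0, 1] to one of `levels` values,
    mimicking a rate code with a limited number of spike counts."""
    a = min(max(a, 0.0), 1.0)
    return round(a * (levels - 1)) / (levels - 1)

# Example: a handful of trained weights collapse onto at most 16 values.
weights = [0.83, -0.41, 0.07, -0.96, 0.52]
print(quantize_weights(weights))
```

A fan-in cap such as TrueNorth's 64 inputs per neuron would additionally require pruning or splitting connections, which this sketch does not cover.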
Pages: 8
Related papers
50 in total
  • [41] Assessment of Recurrent Spiking Neural Networks on Neuromorphic Accelerators for Naturalistic Texture Classification
    Ali, Haydar Al Haj
    Dabbous, Ali
    Ibrahim, Ali
    Valle, Maurizio
    2023 18TH CONFERENCE ON PH.D RESEARCH IN MICROELECTRONICS AND ELECTRONICS, PRIME, 2023, : 285 - 288
  • [42] SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks
    Liu, Fangxin
    Zhao, Wenbo
    Chen, Yongbiao
    Wang, Zongwu
    Jiang, Li
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 1692 - 1701
  • [43] BSNN: Towards faster and better conversion of artificial neural networks to spiking neural networks with bistable neurons
    Li, Yang
    Zhao, Dongcheng
    Zeng, Yi
    FRONTIERS IN NEUROSCIENCE, 2022, 16
  • [44] SPIKING NEURAL NETWORKS TRAINED WITH BACKPROPAGATION FOR LOW POWER NEUROMORPHIC IMPLEMENTATION OF VOICE ACTIVITY DETECTION
    Martinelli, Flavio
    Dellaferrera, Giorgia
    Mainar, Pablo
    Cernak, Milos
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 8544 - 8548
  • [45] A Design Flow for Mapping Spiking Neural Networks to Many-Core Neuromorphic Hardware
    Song, Shihao
    Varshika, M. Lakshmi
    Das, Anup
    Kandasamy, Nagarajan
    2021 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN (ICCAD), 2021,
  • [46] Conversion of Artificial Neural Network to Spiking Neural Network for Hardware Implementation
    Chen, Yi-Lun
    Lu, Chih-Cheng
    Juang, Kai-Cheung
    Tang, Kea-Tiong
    2019 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TW), 2019,
  • [47] Large-Scale Spiking Neural Networks using Neuromorphic Hardware Compatible Models
    Krichmar, Jeffrey L.
    Coussy, Philippe
    Dutt, Nikil
    ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS, 2015, 11 (04)
  • [48] Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study
    Pfeil, Thomas
    Jordan, Jakob
    Tetzlaff, Tom
    Gruebl, Andreas
    Schemmel, Johannes
    Diesmann, Markus
    Meier, Karlheinz
    PHYSICAL REVIEW X, 2016, 6 (02):
  • [49] Dataset Conversion for Spiking Neural Networks
    Sadovsky, Erik
    Jakubec, Maros
    Jarinova, Darina
    Jarina, Roman
    2023 33RD INTERNATIONAL CONFERENCE RADIOELEKTRONIKA, RADIOELEKTRONIKA, 2023,
  • [50] The Implementation and Optimization of Neuromorphic Hardware for Supporting Spiking Neural Networks With MLP and CNN Topologies
    Ye, Wujian
    Chen, Yuehai
    Liu, Yijun
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2023, 42 (02) : 448 - 461