Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware

Cited by: 20
Authors
Balaji, Adarsha [1 ]
Marty, Thibaut [2 ]
Das, Anup [1 ]
Catthoor, Francky [3 ]
Affiliations
[1] Drexel Univ, Philadelphia, PA 19104 USA
[2] ENS Rennes, Rennes, Ille & Vilaine, France
[3] IMEC, Neuromorph Div, B-3001 Leuven, Belgium
Funding
National Science Foundation (USA);
Keywords
Spiking Neural Networks (SNN); Neuromorphic computing; Internet of Things (IoT); Run-time; Mapping; DESIGN; SYSTEM;
DOI
10.1007/s11265-020-01573-8
CLC classification
TP [Automation technology; computer technology];
Discipline code
0812 ;
Abstract
Neuromorphic architectures implement biological neurons and synapses to execute machine learning algorithms with spiking neurons and bio-inspired learning rules. These architectures are energy efficient and therefore suitable for cognitive information processing in resource- and power-constrained environments, such as those in which the sensor and edge nodes of the internet-of-things (IoT) operate. To map a spiking neural network (SNN) to a neuromorphic architecture, prior works have proposed design-time solutions, in which the SNN is first analyzed offline using representative data and then mapped to the hardware to optimize objective functions such as minimizing spike communication or maximizing resource utilization. In many emerging applications, machine learning models may change with the input through online learning rules. In online learning, new connections may form or existing connections may disappear at run-time based on input excitation. Therefore, an already mapped SNN may need to be re-mapped to the neuromorphic hardware to ensure optimal performance. Unfortunately, design-time approaches are too computationally expensive to remap a machine learning model at run-time after every learning epoch. In this paper, we propose a design methodology to partition and map the neurons and synapses of online learning SNN-based applications to neuromorphic architectures at run-time. Our design methodology operates in two steps: step 1 is a layer-wise greedy approach that partitions SNNs into clusters of neurons and synapses while incorporating the constraints of the neuromorphic architecture, and step 2 is a hill-climbing optimization that minimizes the total number of spikes communicated between clusters, improving energy consumption on the shared interconnect of the architecture. We conduct experiments to evaluate the feasibility of our algorithm using synthetic and realistic SNN-based applications.
We demonstrate that our algorithm reduces SNN mapping time by an average of 780x compared to a state-of-the-art design-time SNN partitioning approach with only 6.25% lower solution quality.
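The hill-climbing step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the spike-count dictionary, the per-cluster capacity limit, and all function names are assumptions. Neurons are moved one at a time between clusters, and a move is kept only if it lowers the total spike traffic crossing cluster boundaries while respecting the cluster's neuron capacity (a stand-in for the crossbar constraints of the hardware).

```python
import random

def inter_cluster_spikes(assign, spikes):
    """Total spikes on synapses whose endpoints lie in different clusters.

    assign: dict neuron -> cluster id
    spikes: dict (src_neuron, dst_neuron) -> spike count
    """
    return sum(w for (src, dst), w in spikes.items() if assign[src] != assign[dst])

def hill_climb(assign, spikes, n_clusters, capacity, iters=5000, seed=0):
    """Greedy local search: repeatedly try moving a random neuron to a
    random cluster, accepting the move only if it reduces inter-cluster
    spike traffic and the target cluster has spare capacity."""
    rng = random.Random(seed)
    best = inter_cluster_spikes(assign, spikes)
    sizes = {}
    for c in assign.values():
        sizes[c] = sizes.get(c, 0) + 1
    neurons = list(assign)
    for _ in range(iters):
        n = rng.choice(neurons)
        c_old = assign[n]
        c_new = rng.randrange(n_clusters)
        if c_new == c_old or sizes.get(c_new, 0) >= capacity:
            continue  # no-op move, or target cluster is full
        assign[n] = c_new
        cost = inter_cluster_spikes(assign, spikes)
        if cost < best:           # keep improving moves
            best = cost
            sizes[c_old] -= 1
            sizes[c_new] = sizes.get(c_new, 0) + 1
        else:                     # revert worsening moves
            assign[n] = c_old
    return assign, best
```

For brevity the sketch recomputes the full cut cost after every move; an incremental update of only the moved neuron's incident synapses would be the natural optimization at scale.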
Pages: 1293 - 1302
Page count: 10
Related papers
50 records total
  • [21] Mapping Spiking Neural Networks onto a Manycore Neuromorphic Architecture
    Lin, Chit-Kwan
    Wild, Andreas
    Chinya, Gautham N.
    Lin, Tsung-Han
    Davies, Mike
    Wang, Hong
    PROCEEDINGS OF THE 39TH ACM SIGPLAN CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION, PLDI 2018, 2018, : 78 - 89
  • [22] Optimal Mapping of Spiking Neural Network to Neuromorphic Hardware for Edge-AI
    Xiao, Chao
    Chen, Jihua
    Wang, Lei
    SENSORS, 2022, 22 (19)
  • [23] A Digital Neuromorphic Hardware for Spiking Neural Network
    Fan, Yuanning
    Zou, Chenglong
    Liu, Kefei
    Kuang, Yisong
    Cui, Xiaoxin
    2019 IEEE INTERNATIONAL CONFERENCE ON ELECTRON DEVICES AND SOLID-STATE CIRCUITS (EDSSC), 2019,
  • [24] Spiking Neural Network Design for Neuromorphic Hardware
    Balaji, Adarsha
    2024 IEEE WORKSHOP ON MICROELECTRONICS AND ELECTRON DEVICES, WMED, 2024, : XVI - XVI
  • [25] Biologically-inspired training of spiking recurrent neural networks with neuromorphic hardware
    Bohnstingl, Thomas
    Surina, Anja
    Fabre, Maxime
    Demirag, Yigit
    Frenkel, Charlotte
    Payvand, Melika
    Indiveri, Giacomo
    Pantazi, Angeliki
    2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA, 2022, : 218 - 221
  • [26] Darwin: a neuromorphic hardware co-processor based on Spiking Neural Networks
    Shen, Juncheng
    Ma, De
    Gu, Zonghua
    Zhang, Ming
    Zhu, Xiaolei
    Xu, Xiaoqiang
    Xu, Qi
    Shen, Yangjing
    Pan, Gang
    SCIENCE CHINA-INFORMATION SCIENCES, 2016, 59 (02) : 1 - 5
  • [27] Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware
    Diehl, Peter U.
    Zarrella, Guido
    Cassidy, Andrew
    Pedroni, Bruno U.
    Neftci, Emre
    2016 IEEE INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING (ICRC), 2016,
  • [28] Autocorrelations from emergent bistability in homeostatic spiking neural networks on neuromorphic hardware
    Cramer, Benjamin
    Kreft, Markus
    Billaudelle, Sebastian
    Karasenko, Vitali
    Leibfried, Aron
    Mueller, Eric
    Spilger, Philipp
    Weis, Johannes
    Schemmel, Johannes
    Munoz, Miguel A.
    Priesemann, Viola
    Zierenberg, Johannes
    PHYSICAL REVIEW RESEARCH, 2023, 5 (03):
  • [29] Synaptic Activity and Hardware Footprint of Spiking Neural Networks in Digital Neuromorphic Systems
    Lemaire, Edgar
    Miramond, Benoit
    Bilavarn, Sebastien
    Saoud, Hadi
    Abderrahmane, Nassim
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2022, 21 (06)
  • [30] DFSynthesizer: Dataflow-based Synthesis of Spiking Neural Networks to Neuromorphic Hardware
    Song, Shihao
    Chong, Harry
    Balaji, Adarsha
    Das, Anup
    Shackleford, James
    Kandasamy, Nagarajan
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2022, 21 (03)