Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware

Citations: 20
Authors
Balaji, Adarsha [1 ]
Marty, Thibaut [2 ]
Das, Anup [1 ]
Catthoor, Francky [3 ]
Affiliations
[1] Drexel Univ, Philadelphia, PA 19104 USA
[2] ENS Rennes, Rennes, Ille-et-Vilaine, France
[3] IMEC, Neuromorph Div, B-3001 Leuven, Belgium
Funding
U.S. National Science Foundation;
Keywords
Spiking Neural Networks (SNN); Neuromorphic computing; Internet of Things (IoT); Run-time; Mapping; DESIGN; SYSTEM;
DOI
10.1007/s11265-020-01573-8
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Neuromorphic architectures implement models of biological neurons and synapses in hardware to execute machine learning workloads using spiking neurons and bio-inspired learning rules. These architectures are energy efficient and therefore well suited for cognitive information processing in resource- and power-constrained environments, such as the sensor and edge nodes of the Internet of Things (IoT). To map a spiking neural network (SNN) to a neuromorphic architecture, prior works have proposed design-time solutions, in which the SNN is first analyzed offline using representative data and then mapped to the hardware to optimize objectives such as minimizing spike communication or maximizing resource utilization. In many emerging applications, however, the machine learning model changes with the input through online learning rules: new connections may form and existing connections may disappear at run-time depending on input excitation. An already-mapped SNN may therefore need to be re-mapped to the neuromorphic hardware to retain optimal performance. Unfortunately, because of their high computation time, design-time approaches are not suitable for re-mapping a model at run-time after every learning epoch. In this paper, we propose a design methodology to partition and map the neurons and synapses of online-learning SNN-based applications to neuromorphic architectures at run-time. The methodology operates in two steps: step 1 is a layer-wise greedy approach that partitions the SNN into clusters of neurons and synapses while respecting the resource constraints of the neuromorphic architecture, and step 2 is a hill-climbing optimization that minimizes the total number of spikes communicated between clusters, reducing energy consumption on the architecture's shared interconnect. We evaluate the feasibility of our algorithm using synthetic and realistic SNN-based applications and demonstrate that it reduces SNN mapping time by an average of 780x compared to a state-of-the-art design-time SNN partitioning approach, with only 6.25% lower solution quality.
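The Python sketch below illustrates the flavor of the two-step methodology described in the abstract: a layer-wise greedy partitioning of neurons into clusters under a hardware capacity limit, followed by a hill-climbing pass that accepts only moves that reduce inter-cluster spike traffic. All names here (greedy_partition, hill_climb, inter_cluster_spikes), the data structures (neuron ids grouped by layer, a dict of per-synapse spike counts), and the single max_neurons_per_cluster constraint are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-step run-time SNN mapping (illustrative only).
import random
from collections import defaultdict

def greedy_partition(layers, max_neurons_per_cluster):
    """Step 1: layer-wise greedy partitioning.

    layers -- list of lists of neuron ids, one list per SNN layer.
    Returns a dict mapping neuron id -> cluster id.
    """
    assignment, cluster_id, used = {}, 0, 0
    for layer in layers:                          # walk the network layer by layer
        for neuron in layer:
            if used >= max_neurons_per_cluster:   # open a new cluster when full
                cluster_id += 1
                used = 0
            assignment[neuron] = cluster_id
            used += 1
    return assignment

def inter_cluster_spikes(assignment, synapses):
    """Total spikes crossing cluster boundaries (the cost to minimize).

    synapses -- dict mapping (pre, post) neuron pairs to spike counts.
    """
    return sum(count for (pre, post), count in synapses.items()
               if assignment[pre] != assignment[post])

def hill_climb(assignment, synapses, max_neurons_per_cluster, iterations=1000):
    """Step 2: hill climbing -- move single neurons between clusters,
    keeping a move only if it lowers the inter-cluster spike count."""
    best_cost = inter_cluster_spikes(assignment, synapses)
    clusters = list(set(assignment.values()))
    load = defaultdict(int)
    for c in assignment.values():
        load[c] += 1
    neurons = list(assignment)
    for _ in range(iterations):
        neuron = random.choice(neurons)
        old, new = assignment[neuron], random.choice(clusters)
        if new == old or load[new] >= max_neurons_per_cluster:
            continue                              # respect hardware capacity
        assignment[neuron] = new
        cost = inter_cluster_spikes(assignment, synapses)
        if cost < best_cost:                      # keep only improving moves
            best_cost = cost
            load[old] -= 1
            load[new] += 1
        else:
            assignment[neuron] = old              # revert a non-improving move
    return assignment, best_cost
```

In this sketch, accepting only moves that reduce the inter-cluster spike count mirrors the stated goal of lowering traffic (and hence energy) on the shared interconnect; the paper's actual algorithm may use a different neighborhood, cost model, or acceptance rule.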
Pages: 1293-1302
Page count: 10