NeuSB: A Scalable Interconnect Architecture for Spiking Neuromorphic Hardware

Cited by: 4
Authors
Balaji, Adarsha [1 ]
Huynh, Phu Khanh [1 ]
Catthoor, Francky [2 ]
Dutt, Nikil D. [3 ]
Krichmar, Jeffrey L. [3 ]
Das, Anup [1 ]
Affiliations
[1] Drexel Univ, Dept Elect & Comp Engn, Philadelphia, PA 19104 USA
[2] Katholieke Univ Leuven, IMEC, B-3000 Leuven, Belgium
[3] Univ Calif Irvine, Dept Cognit Sci, Irvine, CA 92697 USA
Funding
National Science Foundation (US);
Keywords
Hardware; Neuromorphics; Computer architecture; Table lookup; Routing; Neurons; Machine learning; Segmented bus; neuromorphic; spiking neural networks; network-on-chip (NoC); non-volatile memory (NVM); NETWORK-ON-CHIP; SEGMENTED BUS; DESIGN; COMMUNICATION; EXPLORATION; NEURONS; ENERGY; NOC;
DOI
10.1109/TETC.2023.3238708
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Neuromorphic systems are typically designed as a tile-based architecture in which inter-tile data communication is facilitated by a shared global interconnect. Congestion on this interconnect can increase both interconnect energy, which raises the total energy consumption of the hardware, and latency, which impacts the performance, e.g., the accuracy, of the application being executed on the hardware. The mesh-based Network-on-Chip (NoC) used in most hardware prototypes is not the optimal interconnect solution for neuromorphic systems, for two reasons. First, the power consumption and average latency of a NoC increase exponentially with the number of tiles in the hardware. Second, a NoC cannot efficiently exploit an application's data communication pattern. Once designed for a target hardware, the bandwidth of each NoC link stays the same, independent of the volume of data traffic between different tile pairs of the NoC. In other words, a NoC cannot be customized at a finer granularity for the individual application running on the hardware. We show that these NoC limitations prevent opportunities to further improve the energy and latency of a neuromorphic hardware. To address these limitations, we propose a Dynamic Segmented Bus (SB) interconnect for neuromorphic systems. Here, a bus lane is partitioned into segments, with each segment connecting a few tiles. Connections of tiles to segments, and those between segments, are bridged using our novel three-way segmentation switches, which are programmed in software before an application is admitted to the hardware. We partition an application by analyzing its workload and intelligently place the partitions onto segments. This exploits application characteristics to use the segments without any routing collisions while capturing the latency and energy savings in the design-time mapping phase.
At a high level, our mapping algorithm places tiles that communicate the most on shorter segments traversing fewer switches, thereby reducing network congestion. It can adjust the bandwidth by controlling the number of segments connected to a destination tile. At run time, our controller executes the predefined routing paths without requiring any additional routing decisions, unlike a NoC. This allows us to improve both energy and latency. Using parallel segmented buses, our proposed interconnect architecture can support a large number of tiles without significantly increasing the design cost, energy, or latency. Simulation results show that compared to the most widely used mesh-based NoC design, our interconnect architecture, which we call NeuSB, reduces switch area by 20x, average interconnect energy by 6.2x, and latency by 23%.
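The placement idea described in the abstract — assigning the heaviest-communicating tiles to the shortest segments (those with the fewest switches) — can be sketched as a simple greedy pairing. This is a minimal illustrative sketch, not the paper's actual algorithm; all names, data structures, and traffic values are hypothetical.

```python
# Hypothetical sketch of the greedy placement idea: tile pairs that exchange
# the most spikes are assigned to the segments with the fewest switches,
# reducing switch traversals and hence congestion. Not the paper's algorithm.

def place_on_segments(traffic, segment_switches):
    """traffic: {(src_tile, dst_tile): spike_count};
    segment_switches: {segment_id: number_of_switches}.
    Returns {(src_tile, dst_tile): segment_id}, pairing the heaviest
    traffic with the shortest segments."""
    pairs = sorted(traffic, key=traffic.get, reverse=True)              # heaviest first
    segments = sorted(segment_switches, key=segment_switches.get)       # shortest first
    return dict(zip(pairs, segments))

# Illustrative example: the heaviest-communicating pair lands on the
# segment with the fewest switches.
traffic = {("t0", "t1"): 900, ("t2", "t3"): 120, ("t4", "t5"): 40}
segment_switches = {"s0": 1, "s1": 2, "s2": 4}
mapping = place_on_segments(traffic, segment_switches)
```

In this toy instance, `mapping` assigns the 900-spike pair `("t0", "t1")` to segment `s0`, the shortest one; the real design-time mapping phase would additionally account for segment connectivity and collision-free routing.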
Pages: 373-387
Page count: 15