IMA-GNN: In-Memory Acceleration of Centralized and Decentralized Graph Neural Networks at the Edge

Cited by: 2
Authors:
Morsali, Mehrdad [1 ]
Nazzal, Mahmoud [1 ]
Khreishah, Abdallah [1 ]
Angizi, Shaahin [1 ]
Affiliation(s):
[1] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
Funding:
U.S. National Science Foundation (NSF);
Keywords:
graph neural network; in-memory computing; edge computing;
DOI:
10.1145/3583781.3590248
Chinese Library Classification:
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes:
081104; 0812; 0835; 1405;
Abstract:
In this paper, we propose IMA-GNN as an In-Memory Accelerator for centralized and decentralized Graph Neural Network inference, explore its potential in both settings, and provide a guideline for the community targeting flexible and efficient edge computation. Leveraging IMA-GNN, we first model the computation and communication latencies of edge devices. We then present practical case studies on GNN-based taxi demand and supply prediction and also adopt four large graph datasets to quantitatively compare and analyze centralized and decentralized settings. Our cross-layer simulation results demonstrate that on average, IMA-GNN in the centralized setting can obtain ~790x communication speed-up compared to the decentralized GNN setting. However, the decentralized setting performs computation ~1400x faster while reducing the power consumption per device. This further underlines the need for a hybrid semi-decentralized GNN approach.
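The trade-off the abstract describes (the centralized setting communicates faster, while the decentralized setting computes faster per device) can be illustrated with a toy first-order latency model. The sketch below is purely illustrative and is not the IMA-GNN model or the paper's measured numbers; every parameter name and value (device counts, throughputs, bandwidths, hop counts, workload sizes) is an assumption chosen only to make the qualitative comparison concrete.

```python
# Hypothetical, illustrative sketch: a toy first-order latency model for
# comparing centralized vs. decentralized GNN inference at the edge.
# All names and values are assumptions, not results from the IMA-GNN paper.

from dataclasses import dataclass


@dataclass
class EdgeSetting:
    name: str
    devices: int                 # edge devices sharing the workload
    compute_tops: float          # effective per-device throughput (tera-ops/s)
    link_bandwidth_gbps: float   # per-hop communication bandwidth (Gbit/s)
    hops: int                    # average communication hops per inference


def inference_latency(setting: EdgeSetting,
                      total_ops: float,
                      bytes_exchanged: float) -> float:
    """Rough end-to-end latency estimate in seconds.

    Compute is assumed to split evenly across devices; communication cost
    grows with the number of hops an embedding must traverse.
    """
    compute_s = total_ops / (setting.devices * setting.compute_tops * 1e12)
    comm_s = setting.hops * (8 * bytes_exchanged) / (setting.link_bandwidth_gbps * 1e9)
    return compute_s + comm_s


if __name__ == "__main__":
    ops = 5e9       # assumed multiply-accumulate operations per inference
    traffic = 2e6   # assumed bytes of node features/embeddings exchanged

    centralized = EdgeSetting("centralized", devices=1,
                              compute_tops=2.0, link_bandwidth_gbps=10.0, hops=1)
    decentralized = EdgeSetting("decentralized", devices=64,
                                compute_tops=0.5, link_bandwidth_gbps=0.1, hops=4)

    for s in (centralized, decentralized):
        print(f"{s.name:>13}: {inference_latency(s, ops, traffic) * 1e3:.2f} ms")
```

With these assumed numbers, the centralized setting is dominated by compute while the decentralized setting is dominated by multi-hop communication, which mirrors the direction of the trade-off reported in the abstract and motivates the hybrid semi-decentralized approach the authors point to.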
Pages: 3-8
Page count: 6
Related Papers
50 records in total
  • [31] GNN at the Edge: Cost-Efficient Graph Neural Network Processing Over Distributed Edge Servers
    Zeng, Liekang
    Yang, Chongyu
    Huang, Peng
    Zhou, Zhi
    Yu, Shuai
    Chen, Xu
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2023, 41 (03) : 720 - 739
  • [32] Eidetic: An In-Memory Matrix Multiplication Accelerator for Neural Networks
    Eckert, Charles
    Subramaniyan, Arun
    Wang, Xiaowei
    Augustine, Charles
    Iyer, Ravishankar
    Das, Reetuparna
    IEEE TRANSACTIONS ON COMPUTERS, 2023, 72 (06) : 1539 - 1553
  • [33] RNSnet: In-Memory Neural Network Acceleration Using Residue Number System
    Salamat, Sahand
    Imani, Mohsen
    Gupta, Saransh
    Rosing, Tajana
    2018 IEEE INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING (ICRC), 2018, : 219 - 230
  • [34] FloatPIM: In-Memory Acceleration of Deep Neural Network Training with High Precision
    Imani, Mohsen
    Gupta, Saransh
    Kim, Yeseong
    Rosing, Tajana
    PROCEEDINGS OF THE 2019 46TH INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA '19), 2019, : 802 - 815
  • [35] Decentralized learning of randomization-based neural networks with centralized equivalence
    Liang, Xinyue
    Javid, Alireza M.
    Skoglund, Mikael
    Chatterjee, Saikat
    APPLIED SOFT COMPUTING, 2022, 115
  • [36] Decentralized Channel Management in WLANs with Graph Neural Networks
    Gao, Zhan
    Shao, Yulin
    Gunduz, Deniz
    Prorok, Amanda
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3072 - 3077
  • [37] Power Flow Balancing With Decentralized Graph Neural Networks
    Hansen, Jonas Berg
    Anfinsen, Stian Normann
    Bianchi, Filippo Maria
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2023, 38 (03) : 2423 - 2433
  • [38] Decentralized Statistical Inference with Unrolled Graph Neural Networks
    Wang, He
    Shen, Yifei
    Wang, Ziyuan
    Li, Dongsheng
    Zhang, Jun
    Letaief, Khaled B.
    Lu, Jie
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 2634 - 2640
  • [39] Decentralized Wireless Resource Allocation with Graph Neural Networks
    Wang, Zhiyang
    Eisen, Mark
    Ribeiro, Alejandro
    2020 54TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2020, : 299 - 303
  • [40] NED-GNN: Detecting and Dropping Noisy Edges in Graph Neural Networks
    Xu, Ming
    Zhang, Baoming
    Yuan, Jinliang
    Cao, Meng
    Wang, Chongjun
    WEB AND BIG DATA, PT I, APWEB-WAIM 2022, 2023, 13421 : 91 - 105