Distributed Temporal Graph Neural Network Learning over Large-Scale Dynamic Graphs

Cited by: 0
Authors
Fang, Ziquan [1 ]
Sun, Qichen [1 ]
Wang, Qilong [1 ]
Chen, Lu [1 ]
Gao, Yunjun [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
Keywords
Temporal Graph Neural Networks; Distributed Training
DOI
10.1007/978-981-97-5779-4_4
CLC number
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Temporal Graph Neural Networks (TGNNs) have achieved success in real-world graph-based applications. The increasing scale of dynamic graphs necessitates distributed training. However, deploying TGNNs in a distributed setting poses challenges due to the temporal dependencies in dynamic graphs, the need for computation balance during distributed training, and the non-negligible communication costs across disjoint trainers. In this paper, we propose DisTGL, a distributed temporal graph neural network learning system. Leveraging a temporal-aware partitioning scheme and a series of enhanced communication techniques, DisTGL ensures efficient distributed computation and minimizes communication overhead. Building on these components, DisTGL enables fast TGNN training and downstream tasks. An evaluation of DisTGL using various TGNN models shows that i) DisTGL achieves speedups of up to 10x over existing TGNN frameworks; and ii) the proposed distributed dynamic graph partitioning reduces cross-machine operations by 25%, while the optimized communication reduces costs by 1.5-2.5x.
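The abstract does not spell out DisTGL's temporal-aware partitioning algorithm, so the following is only a minimal, hypothetical sketch of the general idea behind such schemes: stream temporal edges in time order and greedily assign each node to the worker where most of its neighbors already live, penalized by worker load to keep computation balanced. All names (`temporal_greedy_partition`, `cut_edges`, the `alpha` load-penalty weight) are illustrative, not from the paper.

```python
from collections import defaultdict

def temporal_greedy_partition(edges, num_parts, alpha=0.5):
    """Greedy streaming heuristic for temporal-aware graph partitioning.

    edges: iterable of (src, dst, timestamp), assumed sorted by timestamp.
    On a node's first appearance it is placed on the partition maximizing
    neighbor co-location minus a load penalty (computation balance).
    """
    assign = {}                                   # node -> partition id
    load = [0] * num_parts                        # edges handled per partition
    neighbor_count = defaultdict(lambda: [0] * num_parts)

    for src, dst, _ts in edges:
        for node in (src, dst):
            if node not in assign:
                counts = neighbor_count[node]
                # score = co-located neighbors so far - load penalty
                assign[node] = max(range(num_parts),
                                   key=lambda p: counts[p] - alpha * load[p])
        p_src, p_dst = assign[src], assign[dst]
        load[p_src] += 1
        # record where each endpoint's neighbor ended up, to guide later nodes
        neighbor_count[src][p_dst] += 1
        neighbor_count[dst][p_src] += 1

    return assign

def cut_edges(edges, assign):
    """Count cross-partition edges (a proxy for cross-machine operations)."""
    return sum(assign[s] != assign[d] for s, d, _ in edges)
```

In this sketch, lowering the number of cut edges is what would reduce cross-machine operations during message passing; the 25% reduction reported in the abstract comes from the paper's own (more sophisticated) partitioner, not from this heuristic.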
Pages: 51-66
Number of pages: 16