A Sparse Matrix Optimization Method for Graph Neural Networks Training

Cited: 0
Authors
Yao, Tiechui [1,2]
Wang, Jue [1,2]
Gu, Junyu [1,2]
Shi, Yumeng [1,2]
Liu, Fang [1,2]
Wang, Xiaoguang [2]
Wang, Yangang [1,2]
Chi, Xuebin [1,2]
Affiliations
[1] Chinese Acad Sci, Comp Network Informat Ctr, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
Funding
National Key Research and Development Program of China
Keywords
Sparse matrix format; Sparse matrix-vector multiplication; Performance model; Graph neural networks
DOI
10.1007/978-3-031-40283-8_11
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Graph neural networks (GNNs) have shown great application potential in scientific research, biomedicine, and other fields, owing to their superior feature representation capabilities for graph data with non-Euclidean structure. These capabilities rely on efficient sparse matrix-matrix multiplication (SpMM) and sparse matrix-vector multiplication (SpMV) over sparse matrix representations of the graph structure. However, SpMM suffers from high memory occupation and irregular memory access, which lead to low storage and computational efficiency. To address these issues, this paper proposes a sparse matrix optimization method comprising a sparse matrix format and a performance model. The format, named BMCOO, divides the sparse matrix into multiple blocks and uses a bitmap to compress the positions of the non-zero elements in each block. The paper further designs an SpMV algorithm for the BMCOO format on GPU. In addition, a multi-channel SpMV performance model is constructed to predict SpMV execution time from the sparse matrix scale and system architecture parameters; this model then guides the graph partitioning used during GNN training. Experiments on the SuiteSparse and Open Graph Benchmark datasets verify the effectiveness and superiority of the proposed method.
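The abstract only outlines the BMCOO idea (blocking plus a per-block bitmap of non-zero positions), so the following Python sketch is a minimal serial illustration of that idea, not the paper's GPU implementation; the names to_bmcoo and bmcoo_spmv, the block size of 4, the row-major bit order, and the divisibility requirement are all assumptions made for exposition.

# Illustrative sketch only: layout details below are assumptions,
# not the authors' BMCOO GPU implementation.
import numpy as np

def to_bmcoo(dense, block=4):
    """Convert a dense matrix into a BMCOO-like blocked bitmap layout.

    For every block containing at least one non-zero we keep: its
    (block-row, block-col) coordinate, one integer bitmap marking the
    non-zero positions inside the block, and the non-zero values.
    """
    rows, cols = dense.shape
    assert rows % block == 0 and cols % block == 0, "pad to a block multiple"
    coords, bitmaps, values = [], [], []
    for bi in range(0, rows, block):
        for bj in range(0, cols, block):
            tile = dense[bi:bi + block, bj:bj + block]
            mask = tile != 0
            if not mask.any():
                continue  # all-zero blocks are not stored at all
            bits = 0
            for k, flag in enumerate(mask.ravel()):  # row-major bit order
                if flag:
                    bits |= 1 << k
            coords.append((bi // block, bj // block))
            bitmaps.append(bits)
            values.append(tile[mask])  # non-zeros, row-major within block
    return coords, bitmaps, values

def bmcoo_spmv(shape, block, coords, bitmaps, values, x):
    """Serial reference SpMV (y = A @ x) over the BMCOO-like structure."""
    y = np.zeros(shape[0])
    for (br, bc), bits, vals in zip(coords, bitmaps, values):
        vi = 0
        for k in range(block * block):
            if bits >> k & 1:  # decode position k from the bitmap
                r = br * block + k // block
                c = bc * block + k % block
                y[r] += vals[vi] * x[c]
                vi += 1
    return y

# Quick self-check against a dense matvec on a random sparse matrix.
rng = np.random.default_rng(0)
A = rng.random((8, 8)) * (rng.random((8, 8)) < 0.2)
x = rng.random(8)
assert np.allclose(bmcoo_spmv(A.shape, 4, *to_bmcoo(A, 4), x), A @ x)

The memory saving in such a layout comes from replacing the per-element (row, col) index pairs of plain COO with one bitmap per occupied block: a 4x4 block needs a single 16-bit mask instead of up to 16 coordinate pairs.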
Pages: 114-123
Number of pages: 10
Related Papers
50 records in total
  • [21] Adaptive Parallel Training for Graph Neural Networks
    Ma, Kaihao
    Liu, Renjie
    Yan, Xiao
    Cai, Zhenkun
    Song, Xiang
    Wang, Minjie
    Li, Yichao
    Cheng, James
    PROCEEDINGS OF THE 30TH ACM SIGPLAN ANNUAL SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING, PPOPP 2025, 2025, : 29 - 42
  • [22] Training Graph Neural Networks by Graphon Estimation
    Hu, Ziqing
    Fang, Yihao
    Lin, Lizhen
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 5153 - 5162
  • [23] Training Graph Neural Networks with 1000 Layers
    Li, Guohao
    Müller, Matthias
    Ghanem, Bernard
    Koltun, Vladlen
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [24] Batched Sparse Matrix Multiplication for Accelerating Graph Convolutional Networks
    Nagasaka, Yusuke
    Nukada, Akira
    Kojima, Ryosuke
    Matsuoka, Satoshi
    2019 19TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND GRID COMPUTING (CCGRID), 2019, : 231 - 240
  • [25] Neural Graph Learning: Training Neural Networks Using Graphs
    Bui, Thang D.
    Ravi, Sujith
    Ramavajjala, Vivek
    WSDM'18: PROCEEDINGS OF THE ELEVENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2018, : 64 - 71
  • [26] An Intelligent Robustness Optimization Method for Internet of Things Using Graph Neural Networks
    Peng, Ya-Bin
    Liu, Cai-Xia
    Liu, Shu-Xin
    Wang, Kai
    2021 THE 7TH INTERNATIONAL CONFERENCE ON COMMUNICATION AND INFORMATION PROCESSING, ICCIP 2021, 2021, : 171 - 175
  • [27] DCOM-GNN: A Deep Clustering Optimization Method for Graph Neural Networks
    Yang, Haoran
    Wang, Junli
    Duan, Rui
    Yan, Chungang
    KNOWLEDGE-BASED SYSTEMS, 2023, 279
  • [28] Compressing Deep Neural Networks With Sparse Matrix Factorization
    Wu, Kailun
    Guo, Yiwen
    Zhang, Changshui
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (10) : 3828 - 3838
  • [29] Ray-guided global optimization method for training neural networks
    Zhang, XM
    Chen, YQ
    NEUROCOMPUTING, 2000, 30 (1-4) : 333 - 337
  • [30] Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
    Funke, Thorben
    Khosla, Megha
    Rathee, Mandeep
    Anand, Avishek
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (08) : 8687 - 8698