A Sparse Matrix Optimization Method for Graph Neural Networks Training

Citations: 0
|
Authors
Yao, Tiechui [1 ,2 ]
Wang, Jue [1 ,2 ]
Gu, Junyu [1 ,2 ]
Shi, Yumeng [1 ,2 ]
Liu, Fang [1 ,2 ]
Wang, Xiaoguang [2 ]
Wang, Yangang [1 ,2 ]
Chi, Xuebin [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Comp Network Informat Ctr, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Sparse matrix format; Sparse matrix-vector multiplication; Performance model; Graph neural networks;
DOI
10.1007/978-3-031-40283-8_11
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph neural networks (GNNs) have shown great application potential in scientific research, biomedicine, and other fields, owing to their superior feature representation capabilities for graph data with non-Euclidean structures. These capabilities rely on efficient sparse matrix-matrix multiplication (SpMM) and sparse matrix-vector multiplication (SpMV) over sparse representations of the graph structure. However, SpMM suffers from high memory occupation and irregular memory access, which leads to low storage and computational efficiency. To address these issues, this paper proposes a sparse matrix optimization method consisting of a sparse matrix format and a performance model. The format, named BMCOO, divides the sparse matrix into multiple blocks and uses a bitmap to compress the position information of the non-zero elements in each block. The paper further designs an SpMV algorithm for the BMCOO format on GPUs. In addition, a multi-channel SpMV performance model is constructed to predict the execution time of SpMV from the sparse matrix scale and system architecture parameters. The performance model then fine-tunes the graph partitioning used during GNN training. Experiments on the SuiteSparse and Open Graph Benchmark datasets verify the effectiveness and superiority of the proposed method.
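To illustrate the idea behind a bitmap-compressed block format, the minimal sketch below partitions a matrix into fixed-size blocks and stores, for each non-empty block, a single integer bitmap of its non-zero positions plus the non-zero values, then runs a naive CPU SpMV over that layout. The block size, the names `BitmapBlock`-style helpers (`to_bitmap_blocks`, `spmv`), and the row-major packing order are illustrative assumptions; this is not the paper's BMCOO implementation or its GPU kernel.

```python
# Minimal sketch (assumptions, not the paper's BMCOO code): blocked layout with
# one bitmap per non-empty block marking non-zero positions, values stored densely.
import numpy as np

BLOCK = 4  # assumed block edge length; matrix dims are assumed multiples of BLOCK


def to_bitmap_blocks(dense):
    """Return a list of (block_row, block_col, bitmap, values) for non-empty blocks."""
    rows, cols = dense.shape
    blocks = []
    for br in range(0, rows, BLOCK):
        for bc in range(0, cols, BLOCK):
            tile = dense[br:br + BLOCK, bc:bc + BLOCK]
            mask = tile != 0
            if not mask.any():
                continue  # empty blocks are skipped entirely
            # Pack the block's non-zero pattern into one integer, row-major order.
            bitmap = 0
            for i, flag in enumerate(mask.ravel()):
                if flag:
                    bitmap |= 1 << i
            blocks.append((br // BLOCK, bc // BLOCK, bitmap, tile[mask].copy()))
    return blocks


def spmv(blocks, x, n_rows):
    """Naive y = A @ x over the bitmap-block representation."""
    y = np.zeros(n_rows)
    for brow, bcol, bitmap, vals in blocks:
        k = 0  # index into this block's value list
        for i in range(BLOCK * BLOCK):
            if bitmap >> i & 1:
                r = brow * BLOCK + i // BLOCK
                c = bcol * BLOCK + i % BLOCK
                y[r] += vals[k] * x[c]
                k += 1
    return y


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((8, 8)) * (rng.random((8, 8)) < 0.2)  # roughly 20% non-zeros
    x = rng.random(8)
    assert np.allclose(spmv(to_bitmap_blocks(A), x, A.shape[0]), A @ x)
```

Compared with plain COO, the bitmap replaces per-element row/column indices with one integer per block, which is the storage-reduction idea the abstract describes; the GPU kernel and the multi-channel performance model are not reproduced here.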
Pages: 114 - 123
Number of pages: 10