A Sparse Matrix Optimization Method for Graph Neural Networks Training

Cited: 0
Authors
Yao, Tiechui [1 ,2 ]
Wang, Jue [1 ,2 ]
Gu, Junyu [1 ,2 ]
Shi, Yumeng [1 ,2 ]
Liu, Fang [1 ,2 ]
Wang, Xiaoguang [2 ]
Wang, Yangang [1 ,2 ]
Chi, Xuebin [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Comp Network Informat Ctr, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
关键词
Sparse matrix format; Sparse matrix-vector multiplication; Performance model; Graph neural networks;
DOI
10.1007/978-3-031-40283-8_11
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph neural networks (GNNs) have shown great application potential in scientific research, biomedicine, and other fields, owing to their superior feature representation capabilities for graph data with non-Euclidean structure. These capabilities rely on efficient sparse matrix-matrix multiplication (SpMM) and sparse matrix-vector multiplication (SpMV) over sparse-matrix representations of the graph structure. However, SpMM suffers from high memory occupation and irregular memory access, which lead to low storage and computational efficiency. To address these issues, this paper proposes a sparse matrix optimization method comprising a sparse matrix format and a performance model. The format, named BMCOO, divides the sparse matrix into multiple blocks and uses a bitmap to compress the position information of the non-zero elements in each block. This paper further designs an SpMV algorithm for the BMCOO format on GPU. In addition, a multi-channel SpMV performance model is constructed to predict the execution time of SpMV from the sparse matrix scale and system architecture parameters; the performance model then fine-tunes the graph partitioning of the GNN training process. Experiments on the SuiteSparse and Open Graph Benchmark datasets verify the effectiveness and superiority of the proposed method.
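The abstract only sketches the BMCOO idea (block partitioning plus a per-block bitmap encoding non-zero positions), so the following is a minimal CPU sketch of that general technique, not the paper's actual implementation: the block size, bit ordering, and function names here are illustrative assumptions, and the paper's GPU kernel is not reproduced.

```python
import numpy as np

BLOCK = 4  # illustrative block size; the paper's actual choice is not stated in the abstract


def to_bmcoo(dense):
    """Toy block-bitmap COO: for each non-empty BLOCK x BLOCK tile, store
    (block_row, block_col, bitmap, values). The bitmap has one bit per
    in-block position (row-major), so explicit (row, col) indices per
    non-zero are not needed."""
    blocks = []
    n_rows, n_cols = dense.shape
    for br in range(0, n_rows, BLOCK):
        for bc in range(0, n_cols, BLOCK):
            sub = dense[br:br + BLOCK, bc:bc + BLOCK]
            if not sub.any():
                continue
            bitmap, vals = 0, []
            for i in range(sub.shape[0]):
                for j in range(sub.shape[1]):
                    if sub[i, j] != 0:
                        bitmap |= 1 << (i * BLOCK + j)  # mark position (i, j)
                        vals.append(sub[i, j])          # values in row-major order
            blocks.append((br // BLOCK, bc // BLOCK, bitmap, np.array(vals)))
    return blocks


def bmcoo_spmv(blocks, x, n_rows):
    """y = A @ x, decoding each value's (row, col) from the bitmap."""
    y = np.zeros(n_rows)
    for brow, bcol, bitmap, vals in blocks:
        k = 0
        bit = bitmap
        while bit:
            pos = (bit & -bit).bit_length() - 1  # index of lowest set bit
            i, j = divmod(pos, BLOCK)            # recover in-block coordinates
            y[brow * BLOCK + i] += vals[k] * x[bcol * BLOCK + j]
            k += 1
            bit &= bit - 1                       # clear lowest set bit
    return y
```

Scanning set bits in ascending order matches the row-major order in which values were stored, which is what lets the bitmap replace per-element index arrays; the storage saving over plain COO comes from paying one bit per block position instead of two integer indices per non-zero.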
Pages: 114-123
Page count: 10