Optimization of Large-Scale Sparse Matrix-Vector Multiplication on Multi-GPU Systems

Cited by: 1
Authors
Gao, Jianhua [1]
Ji, Weixing [1]
Wang, Yizhuo [2]
Affiliations
[1] Beijing Normal University, School of Artificial Intelligence, Beijing, China
[2] Beijing Institute of Technology, School of Computer Science & Technology, Beijing, China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Multi-GPU system; sparse matrix-vector multiplication; data transmission hiding; sparse matrix partitioning; GMRES solver
DOI
10.1145/3676847
CLC Classification
TP3 [Computing and Computer Technology]
Discipline Code
0812 (Computer Science and Technology)
Abstract
Sparse matrix-vector multiplication (SpMV) is a key kernel in many iterative algorithms for solving sparse linear systems. The limited memory and compute resources of a single GPU restrict both the scale and the speed of SpMV. As real-world engineering problems grow in complexity, executing iterative solvers collaboratively across multiple GPUs becomes increasingly necessary. Although multi-GPU SpMV reduces kernel execution time, it introduces additional data-transmission overhead that erodes the performance gains from parallelization. Based on the distribution of non-zero elements in sparse matrices and the tradeoff between redundant computation and data-transfer overhead, this article introduces a series of SpMV optimization techniques tailored to multi-GPU environments that effectively improve the execution efficiency of iterative algorithms on multiple GPUs. First, we propose a two-level, non-zero-count-based matrix partitioning method that increases the overlap of kernel execution and data transmission. Then, to account for the irregular distribution of non-zeros in sparse matrices, we propose a long-row-aware partitioning method that hides even more data transmission. Finally, we propose an optimization that trades costly data transmission for redundant but inexpensive short-row computation. Our experimental evaluation shows that, compared with SpMV on a single GPU, the proposed method achieves average speedups of 2.00x and 1.85x on platforms equipped with two RTX 3090 GPUs and two Tesla V100-SXM2 GPUs, respectively, and an average speedup of 2.65x on a platform equipped with four Tesla V100-SXM2 GPUs.
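To make the non-zero-based partitioning and transmission-hiding ideas concrete, the following is a minimal CUDA sketch, not the authors' implementation. Its assumptions are labeled in the comments: the matrix is stored in CSR format, the kernel assigns one thread per row, the host vector is pinned, and rows in chunk c only read the slice of x staged for chunk c (as in block-structured matrices). The names spmv_csr, split_rows_by_nnz, and spmv_pipelined are hypothetical.

#include <cuda_runtime.h>
#include <algorithm>
#include <vector>

// Baseline CSR SpMV: one thread per row (a simple stand-in kernel).
__global__ void spmv_csr(int row_begin, int row_end,
                         const int *rowptr, const int *col,
                         const double *val, const double *x, double *y) {
    int r = row_begin + blockIdx.x * blockDim.x + threadIdx.x;
    if (r >= row_end) return;
    double sum = 0.0;
    for (int j = rowptr[r]; j < rowptr[r + 1]; ++j)
        sum += val[j] * x[col[j]];
    y[r] = sum;
}

// First level: give each GPU a contiguous row block holding roughly
// nnz/ngpus non-zeros, balancing work rather than row counts.
std::vector<int> split_rows_by_nnz(const int *rowptr, int nrows, int ngpus) {
    std::vector<int> bounds(ngpus + 1, 0);
    long long total = rowptr[nrows];
    int r = 0;
    for (int g = 1; g < ngpus; ++g) {
        long long target = total * g / ngpus;
        while (r < nrows && rowptr[r] < target) ++r;
        bounds[g] = r;
    }
    bounds[ngpus] = nrows;
    return bounds;
}

// Second level: split one GPU's row block into chunks and pipeline them
// on two streams, so chunk c+1's slice of x is copied while chunk c's
// kernel runs; the transfer cost is hidden behind computation.
void spmv_pipelined(int row_begin, int row_end, int nchunks,
                    const int *d_rowptr, const int *d_col, const double *d_val,
                    const double *h_x /* pinned */, double *d_x, double *d_y,
                    const std::vector<int> &x_off /* nchunks+1 slice bounds */) {
    cudaStream_t copy_s, comp_s;
    cudaStreamCreate(&copy_s);
    cudaStreamCreate(&comp_s);
    int rows_per = (row_end - row_begin + nchunks - 1) / nchunks;
    for (int c = 0; c < nchunks; ++c) {
        // Stage this chunk's slice of x on the copy stream.
        int xb = x_off[c], xe = x_off[c + 1];
        cudaMemcpyAsync(d_x + xb, h_x + xb, (xe - xb) * sizeof(double),
                        cudaMemcpyHostToDevice, copy_s);
        cudaEvent_t ready;
        cudaEventCreateWithFlags(&ready, cudaEventDisableTiming);
        cudaEventRecord(ready, copy_s);
        // The kernel waits only for its own slice, not for all copies.
        cudaStreamWaitEvent(comp_s, ready, 0);
        int rb = row_begin + c * rows_per;
        int re = std::min(rb + rows_per, row_end);
        int threads = 256, blocks = (re - rb + threads - 1) / threads;
        spmv_csr<<<blocks, threads, 0, comp_s>>>(rb, re, d_rowptr, d_col,
                                                 d_val, d_x, d_y);
        cudaEventDestroy(ready);  // released asynchronously after completion
    }
    cudaStreamSynchronize(comp_s);
    cudaStreamDestroy(copy_s);
    cudaStreamDestroy(comp_s);
}

The paper's long-row-aware variant and its short-row redundancy tradeoff would change how bounds and x_off are chosen, for example by splitting very long rows across chunks or by recomputing cheap short rows on several GPUs instead of transmitting their inputs, but the overlap skeleton above stays the same.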
Pages: 24