Optimization of Large-Scale Sparse Matrix-Vector Multiplication on Multi-GPU Systems

Times Cited: 1
Authors
Gao, Jianhua [1 ]
Ji, Weixing [1 ]
Wang, Yizhuo [2 ]
Affiliations
[1] Beijing Normal Univ, Sch Artificial Intelligence, Beijing, Peoples R China
[2] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Multi-GPU system; sparse matrix-vector multiplication; data transmission hiding; sparse matrix partitioning; GMRES solver
DOI
10.1145/3676847
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Sparse matrix-vector multiplication (SpMV) is one of the key kernels in many iterative algorithms for solving sparse linear systems. The limited storage and computational resources of a single GPU restrict both the scale and speed of SpMV in problem-solving. As real-world engineering problems continue to grow in complexity, the need to execute iterative solvers collaboratively across multiple GPUs becomes increasingly apparent. Although multi-GPU SpMV reduces kernel execution time, it introduces additional data transmission overhead, which diminishes the performance gains from multi-GPU parallelization. Based on the distribution characteristics of non-zero elements in sparse matrices and the tradeoff between redundant computation and data transfer overhead, this article introduces a series of SpMV optimization techniques tailored for multi-GPU environments that effectively improve the execution efficiency of iterative algorithms on multiple GPUs. First, we propose a two-level matrix partitioning method based on non-zero elements to increase the overlap of kernel execution and data transmission. Then, considering the irregular distribution of non-zero elements in sparse matrices, we propose a long-row-aware matrix partitioning method to hide more data transmission. Finally, we propose an optimization that trades redundant but inexpensive short-row computation for costly data transmission. Our experimental evaluation demonstrates that, compared with SpMV on a single GPU, the proposed method achieves average speedups of 2.00x and 1.85x on platforms equipped with two RTX 3090 GPUs and two Tesla V100-SXM2 GPUs, respectively, and an average speedup of 2.65x on a platform equipped with four Tesla V100-SXM2 GPUs.
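To make the transmission-hiding idea concrete, the sketch below shows one common CUDA pattern for it: a GPU's row block (in CSR format) is split into chunks, and the device-to-host copy of each finished chunk of the result vector overlaps the kernel execution of the next chunk on a second stream. This is a minimal illustration of the general technique under assumed conditions (CSR layout, a fixed chunk count); the function names spmv_csr_chunk and spmv_overlapped are hypothetical and are not the paper's API.

// Minimal sketch, not the authors' implementation: chunked CSR SpMV on one
// GPU, with the device-to-host copy of chunk k overlapped with the kernel
// execution of chunk k+1. All names here are illustrative.
#include <cuda_runtime.h>
#include <algorithm>

// One thread per row of the chunk [row_begin, row_end).
__global__ void spmv_csr_chunk(int row_begin, int row_end,
                               const int *rowptr, const int *colidx,
                               const double *vals, const double *x,
                               double *y) {
    int row = row_begin + blockIdx.x * blockDim.x + threadIdx.x;
    if (row < row_end) {
        double sum = 0.0;
        for (int j = rowptr[row]; j < rowptr[row + 1]; ++j)
            sum += vals[j] * x[colidx[j]];
        y[row] = sum;
    }
}

// Launch chunk kernels on one stream; as each chunk finishes, copy its slice
// of y back on a second stream so the transfer hides behind the next kernel.
// h_y must be pinned (cudaMallocHost) for the copy to be truly asynchronous.
void spmv_overlapped(int n_rows, const int *d_rowptr, const int *d_colidx,
                     const double *d_vals, const double *d_x, double *d_y,
                     double *h_y, int num_chunks) {
    cudaStream_t compute, copy;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&copy);

    int chunk_rows = (n_rows + num_chunks - 1) / num_chunks;
    for (int k = 0; k < num_chunks; ++k) {
        int r0 = k * chunk_rows;
        if (r0 >= n_rows) break;
        int r1 = std::min(n_rows, r0 + chunk_rows);
        int threads = 256;
        int blocks = (r1 - r0 + threads - 1) / threads;
        spmv_csr_chunk<<<blocks, threads, 0, compute>>>(
            r0, r1, d_rowptr, d_colidx, d_vals, d_x, d_y);

        // Make the copy stream wait until this chunk's kernel is done,
        // then start the asynchronous device-to-host transfer of its rows.
        cudaEvent_t done;
        cudaEventCreate(&done);
        cudaEventRecord(done, compute);
        cudaStreamWaitEvent(copy, done, 0);
        cudaMemcpyAsync(h_y + r0, d_y + r0, (r1 - r0) * sizeof(double),
                        cudaMemcpyDeviceToHost, copy);
        cudaEventDestroy(done); // deferred; released once the event completes
    }
    cudaStreamSynchronize(copy);
    cudaStreamDestroy(compute);
    cudaStreamDestroy(copy);
}

In the multi-GPU setting the abstract targets, each device would run this pattern on its own row block; the paper's two-level and long-row-aware partitioning and its redundant short-row computation refine how those blocks and chunks are chosen, which this sketch does not attempt to reproduce.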
Pages: 24