Morph Algorithms on GPUs

Cited by: 54
Authors
Nasre, Rupesh [1 ]
Burtscher, Martin [2 ]
Pingali, Keshav [1 ,3 ]
Affiliations
[1] Univ Texas Austin, Inst Computat Engn & Sci, Austin, TX 78712 USA
[2] SW Texas State Univ, Dept Comp Sci, San Marcos, TX 78666 USA
[3] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA
Funding
National Science Foundation (USA);
Keywords
Algorithms; Languages; Performance; Morph Algorithms; Graph Algorithms; Irregular Programs; GPU; CUDA; Delaunay Mesh Refinement; Survey Propagation; Minimum Spanning Tree; Boruvka; Points-to Analysis; GRAPH ALGORITHMS; PARALLELISM; CUDA;
DOI
10.1145/2517327.2442531
CLC classification number
TP31 [Computer Software];
Subject classification codes
081202; 0835
Abstract
There is growing interest in using GPUs to accelerate graph algorithms such as breadth-first search, computing page-ranks, and finding shortest paths. However, these algorithms do not modify the graph structure, so their implementation is relatively easy compared to general graph algorithms like mesh generation and refinement, which morph the underlying graph in non-trivial ways by adding and removing nodes and edges. We know relatively little about how to implement morph algorithms efficiently on GPUs. In this paper, we present and study four morph algorithms: (i) a computational geometry algorithm called Delaunay Mesh Refinement (DMR), (ii) an approximate SAT solver called Survey Propagation (SP), (iii) a compiler analysis called Points-to Analysis (PTA), and (iv) Boruvka's Minimum Spanning Tree algorithm (MST). Each of these algorithms modifies the graph data structure in different ways and thus poses interesting challenges. We overcome these challenges using algorithmic and GPU-specific optimizations. We propose efficient techniques to perform concurrent subgraph addition, subgraph deletion, conflict detection and several optimizations to improve the scalability of morph algorithms. For an input mesh with 10 million triangles, our DMR code achieves an 80x speedup over the highly optimized serial Triangle program and a 2.3x speedup over a multicore implementation running with 48 threads. Our SP code is 3x faster than a multicore implementation with 48 threads on an input with 1 million literals. The PTA implementation is able to analyze six SPEC 2000 benchmark programs in just 74 milliseconds, achieving a geometric mean speedup of 9.3x over a 48-thread multicore version. Our MST code is slower than a multicore version with 48 threads for sparse graphs but significantly faster for denser graphs. This work provides several insights into how other morph algorithms can be efficiently implemented on GPUs.
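The abstract mentions concurrent subgraph addition, subgraph deletion, and conflict detection but does not spell out a mechanism. The sketch below illustrates one generic way conflict detection is commonly done on GPUs: each thread claims every node of the subgraph it wants to modify via atomicCAS and backs off if any node is already owned. This is not the paper's code; all names (owner, work, claimAndProcess) and the fixed MAX_CAVITY bound are assumptions for illustration.

// Hypothetical sketch of atomicCAS-based conflict detection for morph-style
// updates (e.g., claiming a Delaunay cavity before refining it). Not the
// authors' implementation.
#include <cstdio>
#include <cuda_runtime.h>

#define FREE (-1)       // sentinel: node not claimed by any work item
#define MAX_CAVITY 8    // assumed upper bound on nodes touched per work item

// work[i*MAX_CAVITY + j] lists node ids that work item i wants to modify
// (unused slots hold -1); owner[n] records which item currently holds node n.
__global__ void claimAndProcess(const int *work, int *owner,
                                int numItems, int *succeeded) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numItems) return;

    // Phase 1: try to claim every node of this item's subgraph.
    int claimed = 0;
    bool ok = true;
    for (int j = 0; j < MAX_CAVITY && ok; ++j) {
        int n = work[i * MAX_CAVITY + j];
        if (n < 0) break;                           // end of node list
        if (atomicCAS(&owner[n], FREE, i) == FREE)
            ++claimed;                              // claim succeeded
        else
            ok = false;                             // conflict with another item
    }

    if (ok) {
        // Phase 2 (not shown): safely add/remove nodes and edges of the
        // claimed subgraph; no other thread can touch it this round.
        atomicAdd(succeeded, 1);
    } else {
        // Roll back partial claims so the item can be retried later.
        for (int j = 0; j < claimed; ++j)
            atomicExch(&owner[work[i * MAX_CAVITY + j]], FREE);
    }
}

int main() {
    const int numNodes = 6, numItems = 2;
    // Two work items that overlap on node 2, so at most one succeeds per round.
    int hWork[numItems * MAX_CAVITY] = {
        0, 1, 2, -1, -1, -1, -1, -1,
        2, 3, 4, -1, -1, -1, -1, -1,
    };
    int *dWork, *dOwner, *dSucceeded;
    cudaMalloc(&dWork, sizeof(hWork));
    cudaMalloc(&dOwner, numNodes * sizeof(int));
    cudaMalloc(&dSucceeded, sizeof(int));
    cudaMemcpy(dWork, hWork, sizeof(hWork), cudaMemcpyHostToDevice);
    cudaMemset(dOwner, 0xFF, numNodes * sizeof(int));   // all-0xFF bytes == -1
    cudaMemset(dSucceeded, 0, sizeof(int));

    claimAndProcess<<<1, 32>>>(dWork, dOwner, numItems, dSucceeded);

    int succeeded;
    cudaMemcpy(&succeeded, dSucceeded, sizeof(int), cudaMemcpyDeviceToHost);
    printf("%d of %d work items processed without conflict\n", succeeded, numItems);
    cudaFree(dWork); cudaFree(dOwner); cudaFree(dSucceeded);
    return 0;
}

In schemes of this kind, items whose claims fail are retried in a later kernel launch, and some deterministic priority rule (e.g., lowest item id wins) is typically added to rule out livelock.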
Pages: 147-156
Number of pages: 10