Reinforcement Learning for Load-Balanced Parallel Particle Tracing

Cited by: 1
Authors
Xu, Jiayi [1 ]
Guo, Hanqi [2 ]
Shen, Han-Wei [1 ]
Raj, Mukund [3 ]
Wurster, Skylar W. [1 ]
Peterka, Tom [2 ]
Affiliations
[1] Ohio State Univ, Dept Comp Sci & Engn, Columbus, OH 43210 USA
[2] Argonne Natl Lab, Math & Comp Sci Div, Lemont, IL 60439 USA
[3] Broad Inst MIT & Harvard, Stanley Ctr Psychiat Res, Cambridge, MA 02142 USA
Funding
U.S. National Science Foundation
Keywords
Costs; Heuristic algorithms; Estimation; Load modeling; Data models; Computational modeling; Adaptation models; Distributed and parallel particle tracing; dynamic load balancing; reinforcement learning; COLLECTIVE COMMUNICATION; MODEL; VISUALIZATION; ALGORITHMS; ADVECTION; MPI
DOI
10.1109/TVCG.2022.3148745
Chinese Library Classification (CLC)
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
We explore an online reinforcement learning (RL) paradigm to dynamically optimize parallel particle tracing performance in distributed-memory systems. Our method combines three novel components: (1) a work donation algorithm, (2) a high-order workload estimation model, and (3) a communication cost model. First, we design an RL-based work donation algorithm. Our algorithm monitors the workloads of processes and creates RL agents that donate data blocks and particles from high-workload processes to low-workload processes to minimize program execution time. The agents learn the donation strategy on the fly from reward and cost functions designed to account for processes' workload changes and the data transfer costs of donation actions. Second, we propose a workload estimation model, which helps RL agents estimate the workload distribution of processes in future computations. Third, we design a communication cost model that considers both block and particle data exchange costs, which helps RL agents make donation decisions that minimize communication costs. We demonstrate that our algorithm adapts to different flow behaviors in large-scale fluid dynamics, ocean, and weather simulation data. Our algorithm improves parallel particle tracing performance in terms of parallel efficiency, load balance, and I/O and communication costs in evaluations with up to 16,384 processors.
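To make the work-donation idea concrete, the sketch below is a toy, single-machine illustration of the general pattern the abstract describes: an epsilon-greedy agent repeatedly moves work from the most-loaded process to a learned receiver, rewarded by the reduction in straggler load minus a transfer cost. All function names, the Q-value form, and the flat `transfer_cost` stand-in are hypothetical simplifications, not the authors' implementation (which uses distributed processes, a high-order workload estimator, and a full communication cost model).

```python
import random
from collections import defaultdict

def donate_step(loads, q_values, epsilon=0.1, transfer_cost=0.5, lr=0.5):
    """One donation: the most-loaded process gives half of its excess
    to a receiver chosen epsilon-greedily from learned Q-values."""
    donor = max(range(len(loads)), key=lambda i: loads[i])
    candidates = [i for i in range(len(loads)) if i != donor]
    if random.random() < epsilon:
        receiver = random.choice(candidates)                            # explore
    else:
        receiver = max(candidates, key=lambda i: q_values[(donor, i)])  # exploit
    amount = (loads[donor] - loads[receiver]) / 2.0
    old_max = max(loads)
    loads[donor] -= amount
    loads[receiver] += amount
    # Reward: reduction of the straggler's load, minus a flat stand-in
    # for the block/particle transfer cost of the donation action.
    reward = (old_max - max(loads)) - transfer_cost
    q_values[(donor, receiver)] += lr * (reward - q_values[(donor, receiver)])
    return reward

random.seed(0)
loads = [10.0, 2.0, 4.0, 1.0]   # per-process workload estimates
q = defaultdict(float)
for _ in range(20):
    donate_step(loads, q)
# total work is conserved while the max-min spread shrinks toward balance
```

Because each donation only equalizes a donor-receiver pair, the maximum load never increases and the minimum never decreases, so the load spread is monotonically non-increasing; the learned Q-values steer donations away from pairings whose transfer cost outweighs the balance gain.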
Pages: 3052-3066 (15 pages)