Evaluation of an MPI-based Implementation of the Tascell Task-Parallel Language on Massively Parallel Systems

Cited by: 3
Authors
Muraoka, Daisuke [1 ]
Yasugi, Masahiro [2 ]
Hiraishi, Tasuku [3 ]
Umatani, Seiji [4 ]
Affiliations
[1] Kyushu Inst Technol, Grad Sch Comp Sci & Syst Engn, Kitakyushu, Fukuoka, Japan
[2] Kyushu Inst Technol, Dept Artificial Intelligence, Kitakyushu, Fukuoka, Japan
[3] Kyoto Univ, Acad Ctr Comp & Media Studies, Kyoto 6068501, Japan
[4] Kyoto Univ, Grad Sch Informat, Kyoto 6068501, Japan
DOI: 10.1109/ICPPW.2016.36
Chinese Library Classification: TP3 [Computing technology, computer technology]
Discipline Code: 0812
Abstract
Tascell is a task-parallel language that supports distributed-memory environments. The conventional implementation of Tascell realizes inter-node communication using TCP/IP via Tascell servers. This implementation is well suited to dynamic addition of computation nodes and to wide-area distributed environments. In supercomputer environments, however, TCP/IP may not be available for inter-node communication, and there may be no appropriate place to deploy Tascell servers. In this study, we have developed a server-less implementation of Tascell that realizes inter-node communication using MPI, and we evaluate its performance on massively parallel systems. It performs well on four Xeon Phi coprocessors (with 456 workers) and on the K computer; for instance, our 19-queens solver achieves a 4615-fold speedup relative to a serial implementation with 7168 workers on the K computer. Our server-less implementation guarantees deadlock freedom while requiring only the two-sided communication paradigm and the MPI_THREAD_FUNNELED support level. On Xeon Phi coprocessors, we compare our implementation with other implementations that employ TCP/IP or the MPI_THREAD_MULTIPLE support level.
Pages: 161-170
Number of pages: 10
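
For context on the abstract's claim that the server-less implementation requires only two-sided communication and the MPI_THREAD_FUNNELED support level, the following is a minimal C sketch of that messaging pattern: MPI is initialized requesting FUNNELED support, and a single communication thread probes for and receives messages non-blockingly while other threads compute. The tag WORK_REQUEST_TAG and the function poll_and_forward are hypothetical names introduced for illustration; this is not the authors' Tascell runtime code.

/* Minimal sketch (assumptions noted above): two-sided MPI messaging under
 * the MPI_THREAD_FUNNELED support level, where only the thread that called
 * MPI_Init_thread issues MPI calls. */
#include <mpi.h>
#include <stdio.h>

#define WORK_REQUEST_TAG 1  /* hypothetical tag for work-steal requests */

/* Poll for an incoming request with a non-blocking probe so the single
 * communication thread never blocks indefinitely waiting for a message. */
static void poll_and_forward(void)
{
    int flag = 0;
    MPI_Status st;
    MPI_Iprobe(MPI_ANY_SOURCE, WORK_REQUEST_TAG, MPI_COMM_WORLD, &flag, &st);
    if (flag) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, st.MPI_SOURCE, WORK_REQUEST_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* ... hand the request to a local worker queue here ... */
    }
}

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Request only FUNNELED support: MPI calls stay on this thread. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI_THREAD_FUNNELED not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* A real runtime would loop here until the computation terminates;
     * a single polling step is shown. */
    poll_and_forward();

    MPI_Finalize();
    return 0;
}

Built with mpicc and launched with mpirun, such a loop lets one thread per node service all incoming requests with non-blocking probes, which is in the spirit of the deadlock-freedom property the abstract attributes to the FUNNELED, two-sided design.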