Speculative parallelization

Cited by: 1
Authors
Gonzalez-Escribano, Arturo [1 ]
Llanos, Diego R. [1 ]
Affiliation
[1] Univ Valladolid, Dept Informat, E-47002 Valladolid, Spain
Keywords
How things work; Speculative parallelization
DOI
10.1109/MC.2006.441
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
The most promising technique for automatically parallelizing loops when the system cannot determine dependences at compile time is speculative parallelization. Also called thread-level speculation, this technique optimistically assumes that the system can execute all iterations of a given loop in parallel. A hardware or software monitor divides the iterations into blocks and assigns them to different threads, one per processor, with no prior dependence analysis. If the system discovers a dependence violation at runtime, it discards the incorrectly computed work and restarts it with the correct values. Of course, the more parallel the loop, the greater the benefits this technique delivers. To understand how speculative parallelization works, it is necessary to distinguish between private and shared variables. Informally speaking, private variables are those that the program always modifies in each iteration before using them, whereas values stored in shared variables are written in one iteration and used in others. © 2006 IEEE.
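The abstract above describes the mechanism in prose; the following is a minimal, hypothetical C++17 sketch of a software thread-level-speculation scheme, not the authors' implementation. The toy loop body, the block size, and the buffer-then-commit strategy are all assumptions made for illustration: each thread runs one block of iterations with no prior dependence analysis, logging its reads of shared data and buffering its writes privately; blocks then commit in iteration order, and any block that read an element a predecessor has since written is squashed and re-executed with the correct, committed values.

#include <iostream>
#include <map>
#include <set>
#include <thread>
#include <vector>

int main() {
    const int N = 16;     // total loop iterations (toy example)
    const int BLOCK = 4;  // iterations per speculative block, one block per thread

    // Shared data: values written in one iteration may be read by later ones.
    std::vector<int> shared(N + 4, 1);

    // Hypothetical loop body under speculation: every iteration updates its own
    // element, and every fifth iteration also writes a later element, creating
    // an occasional cross-block dependence.
    auto body = [](int i, auto&& read, auto&& write) {
        write(i, read(i) + i);
        if (i % 5 == 0) write(i + 3, read(i + 3) * 2);
    };

    struct Block { std::set<int> reads; std::map<int, int> writes; };
    std::vector<Block> blocks(N / BLOCK);
    std::vector<std::thread> threads;

    // Speculative phase: launch one thread per block with no dependence analysis.
    // Reads of shared data are logged; writes are buffered privately per block.
    for (int b = 0; b < (int)blocks.size(); ++b) {
        threads.emplace_back([&, b] {
            Block& blk = blocks[b];
            auto read = [&](int idx) {
                auto it = blk.writes.find(idx);
                if (it != blk.writes.end()) return it->second;  // value already written by this block ("private" use)
                blk.reads.insert(idx);                          // speculative read of a shared value
                return shared[idx];
            };
            auto write = [&](int idx, int v) { blk.writes[idx] = v; };
            for (int i = b * BLOCK; i < (b + 1) * BLOCK; ++i) body(i, read, write);
        });
    }
    for (auto& t : threads) t.join();

    // Commit phase, in original iteration order: a block that read an element a
    // predecessor has since written saw a stale value, so it is squashed and
    // re-executed with the correct, committed values.
    std::set<int> committed;
    for (int b = 0; b < (int)blocks.size(); ++b) {
        bool violated = false;
        for (int idx : blocks[b].reads)
            if (committed.count(idx)) { violated = true; break; }

        if (violated) {  // dependence violation: discard the speculative work and restart the block
            auto read  = [&](int idx) { return shared[idx]; };
            auto write = [&](int idx, int v) { shared[idx] = v; committed.insert(idx); };
            for (int i = b * BLOCK; i < (b + 1) * BLOCK; ++i) body(i, read, write);
        } else {         // speculation succeeded: make the buffered writes visible
            for (auto& [idx, v] : blocks[b].writes) { shared[idx] = v; committed.insert(idx); }
        }
    }

    for (int v : shared) std::cout << v << ' ';
    std::cout << '\n';
    return 0;
}

Note how the private/shared distinction from the abstract shows up in the sketch: a read satisfied from the block's own write buffer (a value modified before being used) behaves like a private variable and can never trigger a violation, while reads of values produced in other iterations are the shared accesses the monitor must track.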
Pages: 126 - 128
Page count: 3