Optimizing GPU Cache Policies for MI Workloads

Cited by: 0
Authors
Alsop, Johnathan [1]
Sinclair, Matthew D. [1,2]
Bharadwaj, Srikant [1]
Dutu, Alexandru [1]
Gutierrez, Anthony [1]
Kayiran, Onur [1]
LeBeane, Michael [1]
Potter, Brandon [1]
Puthoor, Sooraj [1,2]
Zhang, Xianwei [1]
Yeh, Tsung Tai [3]
Beckmann, Bradford M. [1]
Affiliations
[1] AMD Research, Urbana, IL 61801 USA
[2] University of Wisconsin, Madison, WI 53706 USA
[3] Purdue University, West Lafayette, IN 47907 USA
Keywords
execution-driven simulation; GPU caching; machine intelligence; machine learning
DOI
Not available
Chinese Library Classification
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
In recent years, machine intelligence (MI) applications have emerged as a major driver for the computing industry. Optimizing these workloads is important, but complicated. As memory demands grow and data movement overheads increasingly limit performance, determining the best GPU caching policy to use for a diverse range of MI workloads represents one important challenge. To study this, we evaluate 17 MI applications and characterize their behavior using a range of GPU caching strategies. In our evaluations, we find that the choice of caching policy in GPU caches involves multiple performance trade-offs and interactions, and there is no one-size-fits-all GPU caching policy for MI workloads. Based on detailed simulation results, we motivate and evaluate a set of cache optimizations that consistently match the performance of the best static GPU caching policies.
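To make the abstract's central claim concrete, below is a minimal, hypothetical trace-driven sketch (not from the paper, which relies on detailed execution-driven GPU simulation): it compares a cache-everything LRU policy against a policy that bypasses streaming accesses on a mixed access pattern. All names (hit_rate, bypass_pred, REUSE_LINES, STREAM_BASE) and the synthetic trace are illustrative assumptions, not the authors' methodology.

```python
# Hypothetical illustration only: a tiny trace-driven model of a fully
# associative LRU cache, used to show why no single static caching policy
# wins for every workload. All names and traces here are made up.
from collections import OrderedDict

def hit_rate(trace, capacity_lines, bypass_pred=None, line_bytes=64):
    """Simulate an LRU cache; accesses matching bypass_pred skip the cache."""
    cache = OrderedDict()          # ordered dict doubles as an LRU stack
    hits = 0
    for addr in trace:
        if bypass_pred is not None and bypass_pred(addr):
            continue               # bypassed access: counted as a miss below
        line = addr // line_bytes
        if line in cache:
            hits += 1
            cache.move_to_end(line)            # refresh LRU position
        else:
            cache[line] = True
            if len(cache) > capacity_lines:
                cache.popitem(last=False)      # evict least recently used line
    return hits / len(trace)

# Build a mixed trace: 64 cache lines that are reused every round (e.g., a small
# weight tile), interleaved with streaming lines touched exactly once (e.g., one
# pass over a large activation tensor).
REUSE_LINES = [i * 64 for i in range(64)]
STREAM_BASE = 1 << 30                          # disjoint streaming region
trace = []
for rnd in range(100):
    for i, addr in enumerate(REUSE_LINES):
        trace.append(addr)                                 # reused data
        trace.append(STREAM_BASE + (rnd * 64 + i) * 64)    # fresh streaming data

print("cache everything :", round(hit_rate(trace, capacity_lines=64), 2))
print("bypass streaming :", round(hit_rate(trace, capacity_lines=64,
                                            bypass_pred=lambda a: a >= STREAM_BASE), 2))
# With a 64-line cache, caching everything thrashes (hit rate ~0.00), while
# bypassing the streaming region keeps the reused lines resident (~0.50).
```

Flipping the trace to a reuse-free streaming pattern would reverse the conclusion (bypassing buys nothing and caching costs little), which is the sense in which the choice of policy is workload-dependent rather than one-size-fits-all.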
Pages: 243-248
Page count: 6