Revisiting Virtual L1 Caches: A Practical Design Using Dynamic Synonym Remapping

Cited by: 0
Authors
Yoon, Hongil [1 ]
Sohi, Gurindar S. [1 ]
Affiliations
[1] Univ Wisconsin Madison, Dept Comp Sci, Madison, WI 53706 USA
Keywords
Address caches; Memory; Performance; Buffer
DOI
Not available
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Virtual caches have potentially lower access latency and energy consumption than physical caches because they do not consult the TLB prior to cache access. However, they have not been popular in commercial designs. The crux of the problem is the possibility of synonyms. This paper makes several empirical observations about the temporal characteristics of synonyms, especially in caches of sizes that are typical of L1 caches. By leveraging these observations, the paper proposes a practical design of an L1 virtual cache that (1) dynamically decides a unique virtual page number for all the synonymous virtual pages that map to the same physical page and (2) uses this unique page number to place and look up data in the virtual caches. Accesses to this unique page number proceed without any intervention. Accesses to other synonymous pages are dynamically detected, and remapped to the corresponding unique virtual page number to correctly access data in the cache. Such remapping operations are rare, due to the temporal properties of synonyms, allowing a Virtual Cache with Dynamic Synonym Remapping (VC-DSR) to achieve most of the benefits of virtual caches but without software involvement. Experimental results based on real-world applications show that VC-DSR can achieve about 92% of the dynamic energy savings for TLB lookups, and 99.4% of the latency benefits of ideal (but impractical) virtual caches for the configurations considered.
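The core idea in the abstract can be illustrated with a minimal software sketch. This is an assumption-laden model, not the authors' hardware design: it assumes a simple page table and treats the first virtual page mapped to a physical page as its unique "leading" virtual page number, which all synonymous accesses are remapped to before indexing the virtual cache.

```python
# Hypothetical sketch of dynamic synonym remapping (VC-DSR idea).
# Assumptions: page-granularity translations, and the first virtual page
# number (VPN) mapped to a physical page number (PPN) becomes the unique
# VPN used to place and look up data in the virtual cache.

class SynonymRemapper:
    def __init__(self):
        self.vpn_to_ppn = {}  # page table: VPN -> PPN
        self.leading = {}     # PPN -> unique ("leading") VPN

    def map_page(self, vpn, ppn):
        """Install a translation; the first VPN seen for a PPN leads."""
        self.vpn_to_ppn[vpn] = ppn
        self.leading.setdefault(ppn, vpn)

    def cache_index_vpn(self, vpn):
        """VPN actually used to index the virtual cache.

        Accesses through the leading VPN proceed without intervention
        (the common case); accesses through other synonyms are detected
        and remapped, which the paper observes is rare in practice."""
        ppn = self.vpn_to_ppn[vpn]
        return self.leading[ppn]
```

For example, if virtual pages 0x10 and 0x20 are both mapped to physical page 0x100, a lookup through either one indexes the cache with 0x10, so a line can never reside in the cache under two synonymous virtual addresses at once.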
Pages: 212-224
Page count: 13