Outperforming LRU with an adaptive replacement cache algorithm

Cited by: 129
Authors
Megiddo, N
Modha, DS
Affiliation
[1] IBM Almaden Research Center, San Jose, CA
DOI
10.1109/MC.2004.1297303
CLC number
TP3 [computing technology, computer technology];
Discipline code
0812 ;
Abstract
Caching, a fundamental metaphor in modern computing, finds wide application in storage systems,(1) databases, Web servers, middleware, processors, file systems, disk drives, redundant array of independent disks (RAID) controllers, operating systems, and other applications such as data compression and list updating.(2) In a two-level memory hierarchy, a cache performs faster than auxiliary storage, but it is more expensive. Cost concerns thus usually limit cache size to a fraction of the auxiliary memory's size. Both cache and auxiliary memory handle uniformly sized items called pages. Requests for pages go first to the cache. When a page is found in the cache, a hit occurs; otherwise, a cache miss happens, and the request goes to the auxiliary memory. In the latter case, a copy is paged into the cache. This practice, called demand paging, rules out prefetching pages from the auxiliary memory into the cache. If the cache is full, before the system can page in a new page, it must page out one of the currently cached pages. A replacement policy determines which page is evicted. A commonly used criterion for evaluating a replacement policy is its hit ratio, the frequency with which it finds a page in the cache. Of course, the replacement policy's implementation overhead should not exceed the anticipated time savings. Discarding the least-recently-used (LRU) page is the policy of choice in cache management. Until recently, attempts to outperform LRU in practice had not succeeded because of overhead issues and the need to pretune parameters. The adaptive replacement cache (ARC) is a self-tuning, low-overhead algorithm that responds online to changing access patterns. ARC continually balances between the recency and frequency features of the workload, demonstrating that adaptation eliminates the need for the workload-specific pretuning that plagued many previous proposals to improve LRU.
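As a point of reference for the demand-paging model described above, a minimal LRU cache can be sketched as follows (illustrative Python only; the class and method names are invented here, not taken from the article):

```python
from collections import OrderedDict

class LRUCache:
    """Demand-paging cache: on a miss the requested page is brought in,
    evicting the least-recently-used page when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # key -> page, ordered LRU .. MRU
        self.hits = self.misses = 0

    def request(self, page):
        """Return True on a cache hit, False on a miss."""
        if page in self.pages:
            self.pages.move_to_end(page)    # refresh recency on a hit
            self.hits += 1
            return True
        self.misses += 1
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # page out the LRU page
        self.pages[page] = None             # page in on demand
        return False
```

For example, with a capacity of 2, the request sequence 1, 2, 1, 3, 2 yields one hit (the second request for page 1) and four misses, because paging in page 3 evicts page 2, which was least recently used at that point.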
ARC's online adaptation will likely benefit real-life workloads because of their richness and variability over time. Such workloads can contain long sequential I/Os or moving hot spots, a changing frequency and scale of temporal locality, and fluctuation between stable, repeating access patterns and patterns with transient clustered references. Like LRU, ARC is easy to implement, and its running time per request is essentially independent of the cache size. A real-life implementation revealed that ARC has a low space overhead of 0.75 percent of the cache size. Also, unlike LRU, ARC is scan-resistant: it allows one-time sequential requests to pass through without polluting the cache or flushing pages that have temporal locality. Likewise, ARC effectively handles long periods of low temporal locality. ARC leads to substantial performance gains in terms of an improved hit ratio compared with LRU for a wide range of cache sizes.
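The recency/frequency balancing and scan resistance described above can be sketched in code. The following is an unofficial Python sketch based on the published description of ARC, in which two resident lists (here `t1` for pages seen once, `t2` for pages seen at least twice) are backed by ghost lists of recently evicted keys (`b1`, `b2`), and a target size `p` for `t1` adapts on every ghost hit; details such as the integer division in the adaptation step are simplifications, and this is not the authors' reference implementation:

```python
from collections import OrderedDict

class ARCache:
    """Illustrative sketch of an ARC-style policy (not reference code).
    t1/t2 hold cached pages; b1/b2 are ghost lists of evicted keys.
    p is the adaptive target size of t1."""
    def __init__(self, c):
        self.c, self.p = c, 0
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def _replace(self, in_b2):
        # Evict from t1 into ghost b1 while t1 exceeds its target p,
        # otherwise from t2 into ghost b2.
        if self.t1 and (len(self.t1) > self.p or
                        (in_b2 and len(self.t1) == self.p)):
            k, _ = self.t1.popitem(last=False)
            self.b1[k] = None
        else:
            k, _ = self.t2.popitem(last=False)
            self.b2[k] = None

    def request(self, x):
        """Return True on a cache hit, False on a miss."""
        if x in self.t1:                 # second reference: promote to t2
            del self.t1[x]; self.t2[x] = None
            return True
        if x in self.t2:                 # frequent page: refresh recency
            self.t2.move_to_end(x)
            return True
        if x in self.b1:                 # ghost hit: favor recency, grow p
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(False)
            del self.b1[x]; self.t2[x] = None
            return False
        if x in self.b2:                 # ghost hit: favor frequency, shrink p
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(True)
            del self.b2[x]; self.t2[x] = None
            return False
        # Complete miss: bound |t1|+|b1| by c and total bookkeeping by 2c.
        total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(False)
            else:
                self.t1.popitem(last=False)
        elif total >= self.c:
            if total == 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(False)
        self.t1[x] = None
        return False
```

The sketch exhibits the scan resistance noted in the abstract: a long one-time sequential scan flows through `t1` while pages with reuse sit in `t2`, so the scan cannot flush them; and because only key metadata lives in the ghost lists, the extra space cost stays small.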
Pages: 58 / +
Page count: 9
Related papers
50 items in total
  • [21] Efficient LRU algorithm for cache scheduling in a disk array system
    Jin, Hai
    Hwang, Kai
    International Journal of Computers and Applications, 2000, 22 (03) : 134 - 139
  • [22] The Minimizating of Hardware for Implementation of Pseudo LRU Algorithm for Cache Memory
    Puidenko, Vadym
    Kharchenko, Vyacheslav
    2020 IEEE 11TH INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS, SERVICES AND TECHNOLOGIES (DESSERT): IOT, BIG DATA AND AI FOR A SAFE & SECURE WORLD AND INDUSTRY 4.0, 2020, : 65 - 71
  • [23] Estimating neural networks-based algorithm for adaptive cache replacement
    Obaidat, MS
    Khalid, H
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 1998, 28 (04): : 602 - 611
  • [24] Dueling CLOCK: Adaptive Cache Replacement Policy Based on The CLOCK Algorithm
    Janapsatya, Andhi
    Ignjatovic, Aleksandar
    Peddersen, Jorgen
    Parameswaran, Sri
    2010 DESIGN, AUTOMATION & TEST IN EUROPE (DATE 2010), 2010, : 920 - 925
  • [25] Modeling LRU cache with invalidation
    Detti, Andrea
    Bracciale, Lorenzo
    Loreti, Pierpaolo
    Melazzi, Nicola Blefari
    COMPUTER NETWORKS, 2018, 134 : 55 - 65
  • [26] LRU-assist: An efficient algorithm for cache leakage power controlling
    Lab. 610, School of Computer, National University of Defense Technology, Changsha 410073, China
    Tien Tzu Hsueh Pao, 2006, 9 (1626-1630):
  • [27] LR-LRU: A PACS-Oriented Intelligent Cache Replacement Policy
    Wang, Yinyin
    Yang, Yuwang
    Han, Chen
    Ye, Lei
    Ke, Yaqi
    Wang, Qingguang
    IEEE ACCESS, 2019, 7 : 58073 - 58084
  • [28] Pseudo-FIFO architecture of LRU replacement algorithm
    Ghasemzadeh, Hassan
    Fatemi, Seyed Omid
    PROCEEDINGS OF THE INMIC 2005: 9TH INTERNATIONAL MULTITOPIC CONFERENCE - PROCEEDINGS, 2005, : 20 - 26
  • [29] FPGA implementation of simplified Fuzzy LRU replacement algorithm
    Titinchi, Ali A.
    Halasa, Nasser
    2019 16TH INTERNATIONAL MULTI-CONFERENCE ON SYSTEMS, SIGNALS & DEVICES (SSD), 2019, : 657 - 662
  • [30] AdaptiveClimb - Adaptive Policy for Cache Replacement
    Berend, Daniel
    Dolev, Shlomi
    Kogan-Sadetsky, Marina
    SYSTOR '19: PROCEEDINGS OF THE 12TH ACM INTERNATIONAL SYSTEMS AND STORAGE CONFERENCE, 2019, : 187 - 187