Outperforming LRU with an adaptive replacement cache algorithm

Cited by: 129
Authors: Megiddo, N; Modha, DS
Institution: [1] IBM Almaden Research Center, San Jose, CA
DOI: 10.1109/MC.2004.1297303
CLC number: TP3 [computing technology, computer technology]
Discipline code: 0812
Abstract
Caching, a fundamental metaphor in modern computing, finds wide application in storage systems,(1) databases, Web servers, middleware, processors, file systems, disk drives, redundant array of independent disks (RAID) controllers, operating systems, and other applications such as data compression and list updating.(2) In a two-level memory hierarchy, a cache performs faster than auxiliary storage, but it is more expensive. Cost concerns thus usually limit cache size to a fraction of the auxiliary memory's size. Both cache and auxiliary memory handle uniformly sized items called pages. Requests for pages go first to the cache. When a page is found in the cache, a hit occurs; otherwise, a cache miss happens, and the request goes to the auxiliary memory. In the latter case, a copy is paged into the cache. This practice, called demand paging, rules out prefetching pages from the auxiliary memory into the cache. If the cache is full, before the system can page in a new page, it must page out one of the currently cached pages. A replacement policy determines which page is evicted. A commonly used criterion for evaluating a replacement policy is its hit ratio, the frequency with which it finds a page in the cache. Of course, the replacement policy's implementation overhead should not exceed the anticipated time savings. Discarding the least-recently-used (LRU) page is the policy of choice in cache management. Until recently, attempts to outperform LRU in practice had not succeeded because of overhead issues and the need to pretune parameters. The adaptive replacement cache (ARC) is a self-tuning, low-overhead algorithm that responds online to changing access patterns. ARC continually balances between the recency and frequency features of the workload, demonstrating that adaptation eliminates the need for the workload-specific pretuning that plagued many previous proposals to improve LRU.
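The demand-paging and LRU-eviction loop described above can be sketched minimally in Python (the class name, page values, and hit-ratio bookkeeping here are illustrative, not from the paper):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal demand-paging cache with least-recently-used eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # insertion order tracks recency
        self.hits = self.misses = 0

    def request(self, page):
        if page in self.pages:
            self.hits += 1
            self.pages.move_to_end(page)  # mark as most recently used
        else:
            self.misses += 1
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict the LRU page
            self.pages[page] = True  # demand-page the item into the cache

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

For example, with capacity 2 the request stream 1, 2, 1, 3, 1 produces two hits and three misses: the second request for 1 hits, 3 evicts 2, and the final 1 still hits.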
ARC's online adaptation will likely benefit real-life workloads, given their richness and variability over time. Such workloads can contain long sequential I/Os or moving hot spots, change the frequency and scale of their temporal locality, and fluctuate between stable, repeating access patterns and patterns with transient clustered references. Like LRU, ARC is easy to implement, and its running time per request is essentially independent of the cache size. A real-life implementation revealed that ARC has a low space overhead of 0.75 percent of the cache size. Also, unlike LRU, ARC is scan-resistant: it allows one-time sequential requests to pass through without polluting the cache or flushing pages that have temporal locality. Likewise, ARC effectively handles long periods of low temporal locality. ARC leads to substantial performance gains in terms of an improved hit ratio compared with LRU for a wide range of cache sizes.
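The recency/frequency balancing and scan resistance described in the abstract come from ARC's structure: two cache lists T1 (pages seen once recently) and T2 (pages seen at least twice), two ghost lists B1 and B2 recording recent evictions, and an adaptive target p for T1's size. The sketch below follows that published structure but is a simplified illustration, not the authors' reference implementation; integer arithmetic stands in for the paper's exact adaptation ratios:

```python
from collections import OrderedDict

class ARCache:
    """Simplified sketch of the ARC replacement algorithm."""

    def __init__(self, c):
        self.c = c   # cache capacity in pages
        self.p = 0   # adaptive target size of T1 (recency side)
        self.t1 = OrderedDict()  # in cache, seen once recently
        self.t2 = OrderedDict()  # in cache, seen at least twice
        self.b1 = OrderedDict()  # ghost history of pages evicted from T1
        self.b2 = OrderedDict()  # ghost history of pages evicted from T2

    def _replace(self, hit_in_b2):
        # Evict from T1 or T2 depending on the adaptive target p.
        if self.t1 and (len(self.t1) > self.p or
                        (hit_in_b2 and len(self.t1) == self.p)):
            old, _ = self.t1.popitem(last=False)
            self.b1[old] = True  # remember eviction in ghost list B1
        else:
            old, _ = self.t2.popitem(last=False)
            self.b2[old] = True  # remember eviction in ghost list B2

    def request(self, x):
        """Return True on a cache hit, False on a miss."""
        if x in self.t1:               # hit: promote to frequency list
            del self.t1[x]; self.t2[x] = True
            return True
        if x in self.t2:               # hit: refresh position within T2
            self.t2.move_to_end(x)
            return True
        if x in self.b1:               # ghost hit: recency helping, grow p
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(False)
            del self.b1[x]; self.t2[x] = True
            return False
        if x in self.b2:               # ghost hit: frequency helping, shrink p
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(True)
            del self.b2[x]; self.t2[x] = True
            return False
        # Complete miss: maintain the paper's size invariants, then insert.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(False)
            else:
                self.t1.popitem(last=False)  # B1 empty: drop LRU of T1
        elif (len(self.t1) + len(self.t2)
              + len(self.b1) + len(self.b2)) >= self.c:
            if (len(self.t1) + len(self.t2)
                    + len(self.b1) + len(self.b2)) == 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(False)
        self.t1[x] = True
        return False
```

The sketch shows the scan-resistance property: one-time sequential requests enter only T1 and are evicted from there, so repeatedly accessed pages resident in T2 survive a long scan.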
Pages: 58 / +
Page count: 9