LAC: A Workload Intensity-Aware Caching Scheme for High-Performance SSDs

Cited by: 0
Authors
Sun, Hui [1 ]
Tong, Haoqiang [1 ]
Yue, Yinliang [2 ]
Qin, Xiao [3 ]
Affiliations
[1] Anhui Univ, Sch Comp Sci & Technol, Hefei 230201, Peoples R China
[2] Zhongguancun Lab, Beijing 100049, Peoples R China
[3] Auburn Univ, Dept Comp Sci & Software Engn, Auburn, AL 36849 USA
Funding
National Natural Science Foundation of China;
Keywords
Flash memories; Time factors; Costs; Tail; Writing; Random access memory; Delays; Caching scheme; I/O-intensity awareness; parallel write; die-level monitor; solid state disk; NAND flash; BUFFER MANAGEMENT SCHEME; GARBAGE COLLECTION; FLASH; LRU;
DOI
10.1109/TC.2024.3385290
CLC Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Inside a NAND flash-based solid-state disk (SSD), a DRAM-based write-back cache is a practical way to bolster SSD performance. Existing caching schemes overlook the problem of high user I/O intensity caused by the dramatic growth of I/O accesses. Heavy I/O intensity creates access conflicts among I/O requests inside an SSD: a large number of requests are blocked, which impairs response time. Conventional passive-update caching schemes merely replace pages upon access misses when the cache is full, so tail latency arises under colossal I/O intensity. Active write-back caching schemes exploit idle time between requests, together with free internal bandwidth, to flush dirty data to flash memory in advance, lowering response time. Frequent active write-back operations, however, cause access conflicts among requests - a culprit that inflates write amplification (WA) and degrades SSD lifetime. We address these issues by proposing a workLoad intensity-aware and Active parallel Caching scheme - LAC - powered by collaborative load awareness. LAC fends off access conflicts among user I/Os under high-I/O-intensity workloads. When I/O intensity is low - that is, intervals between consecutive I/O requests are large - and the target die is free, LAC actively and concurrently writes dirty data at adjacent addresses back to the die, accumulating the clean data generated by the active write-back. Preferentially replacing clean data reduces response time and prevents flash transactions from being blocked. We also devise a data-protection method that writes back cold data according to distinct criteria during cache replacement and active write-back; LAC thereby avoids the WA incurred by actively writing back hot data and extends SSD lifetime. We compare LAC against six caching schemes (LRU, CFLRU, GCaR-LRU, MQSim, VS-Batch, and Co-Active) in the modern MQSim simulator.
The results show that LAC trims response time and erase count by up to 78.5% and 47.8%, and by 64.4% and 16.6% on average, respectively.
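The abstract's two core mechanisms - clean-first eviction on a miss and intensity-aware active write-back when inter-request gaps are large and the target die is idle - can be illustrated with a minimal sketch. This is an assumption-laden toy model, not the paper's implementation: the class name `LACSketch`, the gap threshold, and the `flush`/`die_idle` abstractions are all hypothetical stand-ins.

```python
from collections import OrderedDict

# Toy sketch of the two ideas described in the abstract (all names and
# thresholds are assumptions, not LAC's actual design):
#   1. clean-first eviction: evicting a clean page needs no flash write;
#   2. active write-back: when I/O intensity is low (large gap between
#      consecutive requests) and the target die is idle, flush dirty
#      pages early so they become cheap-to-evict clean pages.

IDLE_GAP_THRESHOLD = 0.5  # assumed inter-arrival gap (s) marking low intensity

class LACSketch:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page -> dirty flag, in LRU order
        self.last_arrival = None

    def flush(self, page):
        pass  # stand-in for an actual flash-die write

    def _evict(self):
        # Prefer a clean victim: it can be dropped without a flash write.
        for page, dirty in self.pages.items():
            if not dirty:
                del self.pages[page]
                return
        # No clean page: flush and evict the LRU dirty page.
        page, _ = self.pages.popitem(last=False)
        self.flush(page)

    def access(self, page, is_write, die_idle, now):
        gap = now - self.last_arrival if self.last_arrival is not None else 0.0
        self.last_arrival = now
        if page not in self.pages and len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page] = is_write or self.pages.get(page, False)
        self.pages.move_to_end(page)  # most recently used
        # Active write-back: low intensity + idle die -> flush dirty pages
        # in advance, converting them to clean pages.
        if gap > IDLE_GAP_THRESHOLD and die_idle:
            for p, dirty in self.pages.items():
                if dirty:
                    self.flush(p)
                    self.pages[p] = False
```

In this sketch a miss with a full cache skips over dirty pages in favor of a clean victim, and a long request gap with an idle die converts all dirty pages to clean ones - the abstract's route to fewer blocked transactions. The paper's actual scheme additionally batches adjacent addresses for parallel die writes and protects cold data from WA-inflating write-backs, which this toy omits.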
Pages: 1738-1752
Page count: 15