Integrated prefetching and caching in single and parallel systems

Cited by: 6
Authors:
Albers, S [1]
Büttner, M [1]
Affiliation:
[1] Univ Freiburg, Inst Informat, D-79110 Freiburg, Germany
Keywords:
magnetic disks; prefetching; caching; approximation algorithms; linear program
DOI:
10.1016/j.ic.2005.01.003
Chinese Library Classification (CLC): TP301 [Theory, Methods]
Discipline code: 081202
Abstract:
We study integrated prefetching and caching in single and parallel disk systems. In the first part of the paper, we investigate approximation algorithms for the single disk problem. There exist two very popular approximation algorithms called Aggressive and Conservative for minimizing the total elapsed time. We give a refined analysis of the Aggressive algorithm, improving the original analysis by Cao et al. We prove that our new bound is tight. Additionally, we present a new family of prefetching and caching strategies and give algorithms that perform better than Aggressive and Conservative. In the second part of the paper, we investigate the problem of minimizing stall time in parallel disk systems. We present a polynomial-time algorithm for computing a prefetching/caching schedule whose stall time is bounded by that of an optimal solution. The schedule uses at most 2(D - 1) extra memory locations in cache. This is the first polynomial-time algorithm that, using a small amount of extra resources, computes schedules whose stall times are bounded by those of optimal schedules not using extra resources. Our algorithm is based on the linear programming approach of [Journal of the ACM 47 (2000) 969]. However, in order to achieve minimum stall times, we introduce the new concept of synchronized schedules in which fetches on the D disks are performed completely in parallel. (c) 2005 Elsevier Inc. All rights reserved.
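The Aggressive strategy discussed in the abstract can be illustrated with a small single-disk simulation. The sketch below is a simplified reading of the rule, not the authors' implementation: it assumes unit-time references, a single outstanding fetch of fixed duration F, a warm-start cache, and furthest-future-reference eviction guarded so that a victim is only evicted if it is referenced later than the block being fetched. All function names and the warm-start assumption are illustrative.

```python
def next_use(requests, pos, block):
    """Index of the next reference to `block` at or after `pos`."""
    for j in range(pos, len(requests)):
        if requests[j] == block:
            return j
    return float("inf")


def aggressive(requests, k, F):
    """Return (elapsed_time, stall_time) for a cache of size k.

    Whenever the disk is idle, start fetching the next referenced
    block that is missing from cache, evicting the cached block whose
    next reference lies furthest in the future -- but only if that
    victim is referenced later than the block being fetched.
    Assumes a warm start: the first k distinct blocks begin in cache.
    """
    seen = []
    for b in requests:                  # warm-start the cache
        if b not in seen:
            seen.append(b)
        if len(seen) == k:
            break
    cache = set(seen)

    time = stall = 0
    fetch = None                        # (finish_time, block) or None
    i = 0
    while i < len(requests):
        if fetch is None:               # disk idle: maybe start a prefetch
            missing = next((b for b in requests[i:] if b not in cache), None)
            if missing is not None:
                victim = max(cache, key=lambda b: next_use(requests, i, b))
                if next_use(requests, i, victim) > next_use(requests, i, missing):
                    cache.discard(victim)
                    fetch = (time + F, missing)
        b = requests[i]
        if b in cache:                  # hit: serve in one time unit
            time += 1
            i += 1
        else:                           # miss: stall until the fetch lands
            finish, blk = fetch
            stall += finish - time
            time = finish
            cache.add(blk)
            fetch = None
        if fetch is not None and time >= fetch[0]:
            cache.add(fetch[1])         # fetch completed during service
            fetch = None
    return time, stall
```

For example, on the sequence [1, 2, 3, 1, 2, 3] with k = 2 and F = 2, this simulation incurs 4 units of stall over 10 units of elapsed time; overlapping fetches with hits is exactly what distinguishes integrated prefetching/caching from demand paging.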
Pages: 24-39
Number of pages: 16