Competitive parallel disk prefetching and buffer management

Citations: 1
Authors
Barve, R [1]
Kallahalla, M
Varman, PJ
Vitter, JS
Affiliations
[1] Duke Univ, Dept Comp Sci, Durham, NC 27708 USA
[2] Rice Univ, Dept Elect & Comp Engn, Houston, TX 77251 USA
Source
JOURNAL OF ALGORITHMS-COGNITION INFORMATICS AND LOGIC | 2000, Vol. 36, No. 2
Funding
US National Science Foundation;
Keywords
DOI
10.1006/jagm.2000.1089
CLC Number
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
We provide a competitive analysis framework for online prefetching and buffer management algorithms in parallel I/O systems, using a read-once model of block references. This has widespread applicability to key I/O-bound applications such as external merging and concurrent playback of multiple video streams. Two realistic lookahead models, global lookahead and local lookahead, are defined. Algorithms NOM and GREED, based on these two forms of lookahead, are analyzed for shared-buffer and distributed-buffer configurations, both of which occur frequently in existing systems. An important aspect of our work is that we show how to implement both models of lookahead in practice using the simple techniques of forecasting and flushing. Given a D-disk parallel I/O system and a globally shared I/O buffer that can hold up to M disk blocks, we derive a lower bound of Ω(√D) on the competitive ratio of any deterministic online prefetching algorithm with O(M) lookahead. NOM is shown to match the lower bound using global M-block lookahead. In contrast, using only local lookahead results in an Ω(D) competitive ratio. When the buffer is distributed into D portions of M/D blocks each, the algorithm GREED, based on local lookahead, is shown to be optimal, and NOM is within a constant factor of optimal. Thus we provide a theoretical basis for the intuition that global lookahead is more valuable for prefetching in the case of a shared buffer configuration, whereas local lookahead suffices in the case of a distributed configuration. Finally, we analyze the performance of these algorithms on reference strings generated by a uniformly random stochastic process and show that they achieve the minimum expected number of I/Os. These results also give bounds on the worst-case expected performance of algorithms that employ randomization in the data layout. © 2000 Academic Press.
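For intuition, the sketch below simulates the distributed-buffer setting from the abstract: each of the D disks owns a private M/D-block buffer, and a GREED-style policy greedily fetches, on every parallel I/O step, the next referenced block from each disk with free buffer space. This is an illustrative toy model of the read-once setting, assuming a unit-cost parallel I/O step; the function name `greedy_io_steps` and the representation of the reference string as a list of disk IDs are our own, not code from the paper.

```python
from collections import deque

def greedy_io_steps(refs, num_disks, buf_per_disk):
    """Count parallel I/O steps a GREED-style policy needs to serve `refs`.

    refs: read-once reference string; refs[i] is the disk holding block i.
    Each disk owns a private buffer of `buf_per_disk` blocks; in one
    parallel I/O step every disk may fetch one block.  The policy is
    greedy: a disk with free buffer space always fetches the next block
    of `refs` residing on it (exactly the local lookahead it needs).
    """
    assert buf_per_disk >= 1  # at least one slot per disk, else no progress
    pending = [deque() for _ in range(num_disks)]  # per-disk ref indices, in order
    for i, d in enumerate(refs):
        pending[d].append(i)

    buffers = [deque() for _ in range(num_disks)]  # fetched-but-unread ref indices
    fetched = set()
    steps = 0
    consumed = 0                                   # next reference the consumer needs

    while consumed < len(refs):
        # Drain the buffers: reads cost no I/O once a block is resident.
        while consumed < len(refs) and consumed in fetched:
            buffers[refs[consumed]].popleft()      # read-once: block leaves the buffer
            consumed += 1
        if consumed == len(refs):
            break
        # One parallel I/O step: every disk with free space fetches greedily.
        steps += 1
        for d in range(num_disks):
            if pending[d] and len(buffers[d]) < buf_per_disk:
                i = pending[d].popleft()
                buffers[d].append(i)
                fetched.add(i)
    return steps

# Two disks, two buffer slots each.  Disk 0 holds three of the six blocks,
# so three parallel steps is the minimum; the greedy policy achieves it.
print(greedy_io_steps([0, 1, 0, 0, 1, 1], num_disks=2, buf_per_disk=2))  # -> 3
```

This captures only the distributed-buffer case, where the paper shows GREED is optimal; the shared-buffer algorithm NOM with global lookahead would instead manage a single M-block pool shared across all disks.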
Pages: 152-181
Page count: 30
Related Papers
50 records in total
  • [21] Competitive Buffer Management with Stochastic Packet Arrivals
    Al-Bawani, Kamal
    Souza, Alexander
    EXPERIMENTAL ALGORITHMS, PROCEEDINGS, 2009, 5526: 28-39
  • [22] An Improved Competitive Algorithm for Reordering Buffer Management
    Avigdor-Elgrabli, Noa
    Rabani, Yuval
    PROCEEDINGS OF THE TWENTY-FIRST ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, 2010, 135: 13+
  • [23] An Improved Competitive Algorithm for Reordering Buffer Management
    Avigdor-Elgrabli, Noa
    Rabani, Yuval
    ACM TRANSACTIONS ON ALGORITHMS, 2015, 11 (04)
  • [24] Competitive FIFO Buffer Management for Weighted Packets
    Li, Fei
    2009 7TH ANNUAL COMMUNICATION NETWORKS AND SERVICES RESEARCH CONFERENCE, 2009: 126-132
  • [25] Adaptive prefetching algorithm in disk controllers
    Zhu, Qi
    Gelenbe, Erol
    Qiao, Ying
    PERFORMANCE EVALUATION, 2008, 65 (05): 382-395
  • [26] Competitive Buffer Management for Shared-Memory Switches
    Aiello, William
    Kesselman, Alex
    Mansour, Yishay
    ACM TRANSACTIONS ON ALGORITHMS, 2008, 5 (01)
  • [27] A parallel processor architecture for prefetching
    Kim, SM
    Manoharan, S
    I-SPAN 2000: INTERNATIONAL SYMPOSIUM ON PARALLEL ARCHITECTURES, ALGORITHMS AND NETWORKS, PROCEEDINGS, 2000: 254-259
  • [28] PRE-BUD: Prefetching for Energy-Efficient Parallel I/O Systems with Buffer Disks
    Manzanares, Adam
    Qin, Xiao
    Ruan, Xiaojun
    Yin, Shu
    ACM TRANSACTIONS ON STORAGE, 2011, 7 (01)
  • [29] Parallel prefetching and caching is hard
    Ambühl, C
    Weber, B
    STACS 2004, PROCEEDINGS, 2004, 2996: 211-221
  • [30] Integrated parallel prefetching and caching
    Univ of Washington, Seattle, United States
    PERFORMANCE EVALUATION REVIEW, 1 (262-263)