Flash-Aware High-Performance and Endurable Cache

Cited by: 5
Authors
Xia, Qianbin [1]
Xiao, Weijun [1]
Affiliations
[1] Virginia Commonwealth Univ, Dept Elect & Comp Engn, Richmond, VA 23284 USA
Keywords
Flash memory; Out-of-place update; Read cache; LRU
DOI
10.1109/MASCOTS.2015.22
CLC Number
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
Flash-based SSDs are widely used as storage caches, which benefit from both the higher performance of SSDs and the lower price of disks. Unfortunately, reliability issues and limited lifetime restrict the use of flash-based caches. One way to address this problem is to use flash memory as a read cache and other devices, such as nonvolatile memory, for write buffering. In this paper, we propose a new flash-aware read cache architecture, which leverages the out-of-place update property of flash memory to improve both the cache hit ratio and the cache lifetime. Because of out-of-place updates, when a cache entry is evicted from the flash cache, the eviction removes only the metadata, while the real data remains accessible in its physical flash page until the whole flash block is erased. The main idea of our flash-aware cache is to reuse this evicted but still available data: when a request for previously evicted data arrives, instead of accessing the underlying storage to fetch the data and rewriting it into the flash cache, we simply revive the evicted data. To evaluate the benefits of the flash-aware cache design, we implemented the normal LRU and flash-aware LRU (FLRU) cache algorithms on the DiskSim simulator with an SSD extension. Our simulation results demonstrate that our flash-aware cache can improve the cache hit ratio by up to 28% and alleviate the lifetime limitation of the flash cache by reducing the erase count by up to 70%.
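The revival idea described in the abstract can be illustrated with a small sketch. The Python code below is a minimal, hypothetical model of a flash-aware LRU (FLRU) read cache, not the authors' DiskSim implementation: evicted entries are demoted to a "ghost" table instead of being discarded, a later request for such an entry revives its metadata without rereading the backing disk or reprogramming a flash page, and ghosts disappear only when the flash translation layer erases their block. The names FlashAwareLRU, read, on_block_erased and the PAGES_PER_BLOCK constant are assumptions made for illustration only.

from collections import OrderedDict

PAGES_PER_BLOCK = 64  # assumed flash geometry, for illustration only

class FlashAwareLRU:
    """Minimal sketch of a flash-aware LRU (FLRU) read cache.

    Eviction removes only the cache metadata; the data stays readable in
    its physical flash page until the block is erased, so a later request
    can revive it instead of re-fetching and rewriting it.
    """

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.live = OrderedDict()   # lba -> flash page, in LRU order (valid entries)
        self.ghosts = {}            # lba -> flash page (evicted, data still on flash)
        self.next_page = 0          # out-of-place updates always take a fresh page

    def read(self, lba):
        if lba in self.live:                 # ordinary cache hit
            self.live.move_to_end(lba)
            return "hit"
        if lba in self.ghosts:               # data still on flash: restore metadata only
            page = self.ghosts.pop(lba)
            self._make_room()
            self.live[lba] = page
            return "revived"
        self._make_room()                    # true miss: fetch from disk, write out-of-place
        self.live[lba] = self.next_page
        self.next_page += 1
        return "miss"

    def _make_room(self):
        while len(self.live) >= self.capacity:
            lba, page = self.live.popitem(last=False)  # evict the LRU entry
            self.ghosts[lba] = page                    # keep it revivable until its block is erased

    def on_block_erased(self, block):
        # called by the (not modeled) garbage collector; erased ghosts are gone for good
        lo, hi = block * PAGES_PER_BLOCK, (block + 1) * PAGES_PER_BLOCK
        self.ghosts = {l: p for l, p in self.ghosts.items() if not (lo <= p < hi)}

In a trace-driven run of this sketch, every request that returns "revived" is one that a plain LRU cache would have counted as a miss and served by rereading the disk and reprogramming a flash page; avoiding those extra writes is the effect behind the hit-ratio and erase-count improvements reported in the abstract.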
Pages: 47-50
Number of pages: 4
Related Papers
50 records in total
  • [31] Adaptive cache compression for high-performance processors
    Alameldeen, AR
    Wood, DA
    31ST ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE, PROCEEDINGS, 2004, : 212 - 223
  • [32] A design for high-performance flash disks
    Birrell, Andrew
    Isard, Michael
    Thacker, Chuck
    Wobber, Ted
    Operating Systems Review (ACM), 2007, 41 (02) : 88 - 93
  • [33] IOb-Cache: A High-Performance Configurable Open-Source Cache
    Roque, Joao V.
    Lopes, Joao D.
    Vestias, Mario P.
    de Sousa, Jose T.
    ALGORITHMS, 2021, 14 (08)
  • [34] Cache-oblivious High-performance Similarity Join
    Perdacher, Martin
    Plant, Claudia
    Boehm, Christian
    SIGMOD '19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, 2019, : 87 - 104
  • [35] ACDC: Small, Predictable and High-Performance Data Cache
    Segarra, Juan
    Rodriguez, Clemente
    Gran, Ruben
    Aparicio, Luis C.
    Vinals, Victor
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2015, 14 (02) : 38
  • [36] SCP: Shared Cache Partitioning for High-Performance GEMM
    Su, Xing
    Liao, Xiangke
    Jiang, Hao
    Yang, Canqun
    Xue, Jingling
    ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, 2019, 15 (04)
  • [37] High-Performance with an In-GPU Graph Database Cache
    Morishima, Shin
    Matsutani, Hiroki
    IT PROFESSIONAL, 2017, 19 (06) : 58 - 64
  • [38] PSA-Cache: A Page-state-aware Cache Scheme for Boosting 3D NAND Flash Performance
    Pang, Shujie
    Deng, Yuhui
    Zhang, Genxiong
    Zhou, Yi
    Huang, Yaoqin
    Qin, Xiao
    ACM TRANSACTIONS ON STORAGE, 2023, 19 (02)
  • [39] Flashy Prefetching for High-Performance Flash Drives
    Uppal, Ahsen J.
    Chiang, Ron C.
    Huang, H. Howie
    2012 IEEE 28TH SYMPOSIUM ON MASS STORAGE SYSTEMS AND TECHNOLOGIES (MSST), 2012,
  • [40] MARS: Mobile Application Relaunching Speed-Up through Flash-Aware Page Swapping
    Guo, Weichao
    Chen, Kang
    Feng, Huan
    Wu, Yongwei
    Zhang, Rui
    Zheng, Weimin
    IEEE TRANSACTIONS ON COMPUTERS, 2016, 65 (03) : 916 - 928