CARE: A Concurrency-Aware Enhanced Lightweight Cache Management Framework

Cited by: 2
Authors
Lu, Xiaoyang [1 ]
Wang, Rujia [1 ]
Sun, Xian-He [1 ]
Affiliations
[1] Illinois Institute of Technology (IIT), Department of Computer Science, Chicago, IL 60616, USA
Source
2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2023
Funding
U.S. National Science Foundation (NSF)
Keywords
Replacement; Optimization; Prediction; Policies
DOI
10.1109/HPCA56546.2023.10071125
CLC classification number
TP3 [Computing Technology, Computer Technology]
Discipline classification code
0812
Abstract
Improving cache performance is a long-standing research topic. While exploiting data locality to enhance cache performance has become increasingly difficult, data access concurrency offers a new opportunity for cache performance optimization. In this work, we propose a novel concurrency-aware cache management framework that outperforms state-of-the-art locality-only cache management schemes. First, we investigate the merit of data access concurrency and show that reducing the miss rate does not necessarily lead to better overall performance. Next, we introduce the pure miss contribution (PMC) metric, a lightweight and versatile concurrency-aware indicator that accurately measures the cost of each outstanding miss access by taking data concurrency into account. Then, we present CARE, a dynamically adjustable, concurrency-aware, low-overhead cache management framework built on the PMC metric. We evaluate CARE with extensive experiments across different application domains and show that accounting for data concurrency yields significant performance gains. In a 4-core system, CARE improves IPC by 10.3% over LRU replacement. In 8- and 16-core systems, where more concurrent data accesses exist, CARE outperforms LRU by 13.0% and 17.1%, respectively.
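To make the concurrency argument concrete, below is a minimal, self-contained C++ sketch of the intuition behind a pure-miss style accounting. It assumes a hypothetical per-cycle trace recording how many misses are outstanding and how many hits are being serviced, and it counts the miss cycles that have no concurrent hit activity. The trace format, the CycleSample fields, and the summarize helper are illustrative assumptions for this sketch, not the paper's PMC definition or the CARE implementation; the point is only to show why two workloads with the same number of miss cycles can expose very different stall costs.

#include <cstdio>
#include <vector>

// Illustrative sketch only (not the paper's implementation): count how many
// cycles contain at least one outstanding miss ("miss cycles") and how many of
// those have no concurrent hit being serviced ("pure miss cycles"). The idea,
// following the concurrency-aware view in the abstract, is that a miss whose
// latency overlaps with other useful activity is cheaper than its raw miss
// count suggests.
struct CycleSample {
    int outstanding_misses;  // misses in flight during this cycle
    int hits_in_service;     // hit accesses being serviced during this cycle
};

static void summarize(const char* name, const std::vector<CycleSample>& trace) {
    long miss_cycles = 0, pure_miss_cycles = 0;
    for (const CycleSample& c : trace) {
        if (c.outstanding_misses > 0) {
            ++miss_cycles;
            if (c.hits_in_service == 0)  // no overlap: the full latency is exposed
                ++pure_miss_cycles;
        }
    }
    std::printf("%s: miss cycles=%ld, pure miss cycles=%ld\n",
                name, miss_cycles, pure_miss_cycles);
}

int main() {
    // Two hypothetical traces with the same number of miss cycles; in trace B
    // the misses overlap with hits, so far fewer of them are pure misses.
    std::vector<CycleSample> trace_a = {
        {1, 0}, {1, 0}, {1, 0}, {0, 2}, {0, 2}, {1, 0}, {1, 0}};
    std::vector<CycleSample> trace_b = {
        {1, 2}, {1, 2}, {1, 1}, {0, 2}, {0, 2}, {1, 1}, {1, 0}};
    summarize("trace A (isolated misses)  ", trace_a);
    summarize("trace B (overlapped misses)", trace_b);
    return 0;
}

Under this sketch, trace B has the same five miss cycles as trace A but only one pure miss cycle, which is exactly the kind of difference a locality-only, miss-rate-driven policy cannot see.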
Pages: 1208 - 1220
Page count: 13