Bandwidth-Effective DRAM Cache for GPUs with Storage-Class Memory

Cited by: 1
Authors
Hong, Jeongmin [1 ]
Cho, Sungjun [1 ]
Park, Geonwoo [1 ]
Yang, Wonhyuk [1 ]
Gong, Young-Ho [2 ]
Kim, Gwangsun [1 ]
Institutions
[1] POSTECH, Dept Comp Sci & Engn, Pohang Si, South Korea
[2] Soongsil Univ, Sch Software, Seoul, South Korea
Funding
National Research Foundation of Singapore;
Keywords
PHASE-CHANGE MEMORY; HIGH-PERFORMANCE; MAIN MEMORY; ARCHITECTURE; EFFICIENT; SYSTEM;
DOI
10.1109/HPCA57654.2024.00021
Chinese Library Classification: TP3 [computing technology, computer technology]
Subject classification code: 0812
Abstract
We propose overcoming the memory capacity limitation of GPUs with high-capacity Storage-Class Memory (SCM) and a DRAM cache. By significantly increasing memory capacity with SCM, the GPU can capture a larger fraction of the memory footprint than HBM for workloads that mandate memory oversubscription, resulting in substantial speedups. However, the DRAM cache must be carefully designed to address the latency and bandwidth limitations of the SCM while minimizing cost overhead and accounting for the GPU's characteristics. Because the massive number of GPU threads can easily thrash the DRAM cache and degrade performance, we first propose an SCM-aware DRAM cache bypass policy for GPUs that considers the multi-dimensional characteristics of GPU memory accesses with SCM to bypass DRAM for data with low performance utility. In addition, to reduce DRAM cache probe traffic and increase effective DRAM bandwidth with minimal cost overhead, we propose a Configurable Tag Cache (CTC) that repurposes part of the L2 cache to cache DRAM cacheline tags; the L2 capacity used for the CTC can be adjusted by users for adaptability. Furthermore, to minimize DRAM cache probe traffic from CTC misses, our Aggregated Metadata-In-Last-column (AMIL) DRAM cache organization co-locates all DRAM cacheline tags in a single column within a row. AMIL also retains full ECC protection, unlike prior DRAM cache implementations with a Tag-And-Data (TAD) organization. Additionally, we propose SCM throttling to curtail power consumption and exploit SCM's SLC/MLC modes to adapt to the workload's memory footprint. While our techniques can be used with different DRAM and SCM devices, we focus on a Heterogeneous Memory Stack (HMS) organization that stacks SCM dies on top of DRAM dies for high performance. Compared to HBM, the HMS improves performance by up to 12.5x (2.9x overall) and reduces energy by up to 89.3% (48.1% overall). Compared to prior works, we reduce DRAM cache probe traffic and SCM write traffic by 91-93% and 57-75%, respectively.
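To illustrate the probe-reduction idea described above, here is a minimal, hypothetical Python sketch (not the paper's implementation; all class and method names are invented): a tag cache sits in front of the DRAM cache, and on a tag-cache miss a single probe returns the aggregated tags for the entire row (as in the AMIL layout), so subsequent lookups to the same row avoid further DRAM probes.

```python
# Hypothetical sketch of CTC + AMIL probe filtering (assumed names/structure,
# not the paper's code). A CTC hit resolves a DRAM-cache tag check without a
# DRAM probe; a CTC miss costs one probe that, thanks to the aggregated
# metadata-in-last-column layout, yields every tag in the DRAM row at once.

class DramCacheModel:
    def __init__(self, lines_per_row=31):
        self.lines_per_row = lines_per_row  # AMIL: last column holds row's tags
        self.tags = {}                      # row -> list of cached block tags
        self.probes = 0                     # DRAM tag-probe traffic counter

    def probe_row(self, row):
        """One DRAM column read returns the aggregated tags for the row."""
        self.probes += 1
        return self.tags.get(row, [])

class ConfigurableTagCache:
    def __init__(self, dram, capacity_rows=2):
        self.dram = dram
        self.capacity = capacity_rows       # L2 slice repurposed for tags
        self.cached = {}                    # row -> tag list (FIFO eviction)

    def lookup(self, row, tag):
        """Return True if `tag` hits in the DRAM cache for `row`."""
        if row not in self.cached:          # CTC miss: one AMIL probe fills
            if len(self.cached) >= self.capacity:  # the whole row's tags
                self.cached.pop(next(iter(self.cached)))
            self.cached[row] = self.dram.probe_row(row)
        return tag in self.cached[row]
```

With this toy model, two lookups to different blocks in the same row cost only one DRAM probe, which is the effect the CTC and AMIL combination is designed to achieve at scale.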
Pages: 139-155 (17 pages)