Near-Memory Processing Offload to Remote (Persistent) Memory

Cited by: 0
Authors
Kisous, Roei [1 ]
Golander, Amit [2 ]
Korman, Yigal [2 ]
Gubner, Tim [2 ]
Humborstad, Rune [2 ]
Lu, Manyi [2 ]
Affiliations
[1] Huawei Cloud, Tel Aviv, Israel
[2] Huawei Cloud, Huawei, Norway
Source
Proceedings of the 16th ACM International Systems and Storage Conference (SYSTOR 2023) | 2023
Keywords
Compute Offload; PM; SCM; Distributed Systems;
DOI
10.1145/3579370.3594745
CLC Classification Number
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
Traditional von Neumann computing architectures are struggling to keep up with the rapidly growing demand for scale, performance, power efficiency, and memory capacity. One promising approach to this challenge is Remote Memory, in which memory is accessed over an RDMA fabric [1]. We enhance the remote-memory architecture with Near-Memory Processing (NMP), a capability that offloads particular compute tasks from the client to the server side, as illustrated in Figure 1. A similar motivation drove IBM to offload object processing to its remote KV storage [2]. NMP offload adds latency and server resource costs; it should therefore be used only when the value of offloading is substantial, specifically when it saves network bandwidth (e.g., Filter/Aggregate), round-trip time (e.g., tree Lookup), and/or distributed locks (e.g., Append to a shared journal).
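The offload criterion stated in the abstract can be made concrete with a back-of-envelope model. The C sketch below is not from the paper; it compares a client-side tree lookup that pointer-chases over remote memory against a hypothetical NMP-offloaded lookup that runs on the server, using assumed values for round-trip time, tree depth, node size, and result size.

    /*
     * Minimal back-of-envelope sketch (illustrative only, not from the paper)
     * of the trade-off described in the abstract: client-side pointer chasing
     * over remote memory vs. an NMP-offloaded lookup on the server.
     * All constants (RTT, tree depth, node size, result size) are assumptions.
     */
    #include <stdio.h>

    int main(void) {
        const double rtt_us       = 5.0;   /* assumed RDMA round-trip time */
        const int    tree_depth   = 6;     /* assumed tree levels visited per lookup */
        const size_t node_bytes   = 4096;  /* assumed node size read per level */
        const size_t result_bytes = 64;    /* assumed size of the returned value */

        /* Client-side lookup: one round trip and one node transfer per level. */
        double client_latency_us = tree_depth * rtt_us;
        size_t client_bytes      = (size_t)tree_depth * node_bytes;

        /* NMP offload: the server walks the tree locally; the client pays a
         * single round trip and receives only the final result. */
        double offload_latency_us = rtt_us;
        size_t offload_bytes      = result_bytes;

        printf("client-side lookup:   %.1f us, %zu bytes on the wire\n",
               client_latency_us, client_bytes);
        printf("NMP-offloaded lookup: %.1f us, %zu bytes on the wire\n",
               offload_latency_us, offload_bytes);
        return 0;
    }

Under these assumed numbers the offloaded lookup saves both round trips and bandwidth; where the added server load or offload latency outweighs such savings, the abstract's guidance is to keep the computation on the client.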
Pages: 136 - 136
Number of pages: 1
References
2 records
  • [1] Golander, A. In Proceedings of the 15th ACM International Systems and Storage Conference (SYSTOR 2022), 2022.
  • [2] Wood, A., Hershcovitch, M., Waddington, D., Cohen, S., Wolf, M., Suh, H., Zong, W., Chin, P. Non-Volatile Memory Accelerated Geometric Multi-Scale Resolution Analysis. In 2021 IEEE High Performance Extreme Computing Conference (HPEC), 2021.