SE-PIM: In-Memory Acceleration of Data-Intensive Confidential Computing

Cited by: 1
Authors
Duy, Kha Dinh [1 ]
Lee, Hojoon [1 ]
Affiliation
[1] Sungkyunkwan Univ, Dept Comp Sci & Engn, Seoul 03063, South Korea
Keywords
Cloud computing; Computer architecture; Memory management; Computational modeling; Hardware; Random access memory; Computational efficiency; Processor-in-memory; confidential computing;
DOI
10.1109/TCC.2022.3207145
CLC classification
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Data-intensive workloads and confidential computing are two prominent research directions shaping the future of cloud computing. Computer architectures are evolving to accommodate computation on large data. Meanwhile, a plethora of works has explored protecting the confidentiality of in-cloud computation in the context of hardware-based secure enclaves. However, this approach has faced challenges in achieving efficient computation on large data. In this article, we present a novel design, called SE-PIM, that retrofits Processing-In-Memory (PIM) as a data-intensive confidential computing accelerator. PIM-accelerated computation renders large data computation highly efficient by minimizing data movement. Based on our observation that moving computation closer to memory can simultaneously achieve computational efficiency and confidentiality of the processed data, we study the advantages of confidential computing inside memory. We construct our findings into a software-hardware co-design called SE-PIM. Our design illustrates the advantages of PIM-based confidential computing acceleration. We study the challenges in adapting PIM for confidential computing and propose a set of required changes, as well as a programming model that can utilize them. Our evaluation shows that SE-PIM provides side-channel-resistant secure computation offloading and runs data-intensive applications with negligible performance overhead compared to the baseline PIM model.
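The abstract's core idea, offloading computation so it runs next to the data instead of shipping large data to a CPU-side enclave, can be pictured with a small host-side sketch. The C sketch below is purely illustrative and uses hypothetical names (pim_buf_t, pim_alloc, pim_launch, sum_kernel); it is not SE-PIM's actual interface, but it conveys the shape of a PIM-style offloading programming model: data is staged in PIM-managed memory and a kernel is dispatched to run where the data resides, minimizing movement over the memory bus.

/*
 * Illustrative sketch only: all pim_* names are hypothetical, not SE-PIM's API.
 * The host stages data in a (conceptually) PIM-managed, protected region and
 * dispatches a kernel to the in-memory compute units, so bulk data never has
 * to travel to the host CPU for processing.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical handle to a region of PIM-side memory. */
typedef struct {
    uint8_t *data;
    size_t   len;
} pim_buf_t;

/* Hypothetical signature of a kernel executed by the in-memory compute unit. */
typedef void (*pim_kernel_t)(const pim_buf_t *in, pim_buf_t *out);

/* Stand-in for allocating a buffer in PIM-accessible memory. */
static pim_buf_t pim_alloc(size_t len) {
    pim_buf_t b;
    b.data = malloc(len);
    b.len  = len;
    return b;
}

/* Stand-in for dispatching a kernel to the memory-side compute units.
 * A real design would validate the kernel and run it inside the memory
 * device; here it simply runs on the host for illustration. */
static void pim_launch(pim_kernel_t kernel, const pim_buf_t *in, pim_buf_t *out) {
    kernel(in, out);
}

/* Example data-intensive kernel: sum all bytes of the input buffer. */
static void sum_kernel(const pim_buf_t *in, pim_buf_t *out) {
    uint64_t sum = 0;
    for (size_t i = 0; i < in->len; i++)
        sum += in->data[i];
    memcpy(out->data, &sum, sizeof sum);
}

int main(void) {
    pim_buf_t in  = pim_alloc(1 << 20);        /* 1 MiB staged near memory */
    pim_buf_t out = pim_alloc(sizeof(uint64_t));

    memset(in.data, 1, in.len);                /* fill with dummy data */
    pim_launch(sum_kernel, &in, &out);         /* compute runs where the data lives */

    uint64_t sum;
    memcpy(&sum, out.data, sizeof sum);
    printf("sum = %llu\n", (unsigned long long)sum);

    free(in.data);
    free(out.data);
    return 0;
}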
Pages: 2473-2490
Number of pages: 18