Processing-in-Memory Using Optically-Addressed Phase Change Memory

Cited by: 4
Authors
Yang, Guowei [1 ]
Demirkiran, Cansu [1 ]
Kizilates, Zeynep Ece [1 ]
Ocampo, Carlos A. Rios [2 ]
Coskun, Ayse K. [1 ]
Joshi, Ajay [1 ]
机构
[1] Boston Univ, Boston, MA 02215 USA
[2] Univ Maryland, College Pk, MD 20742 USA
Source
2023 IEEE/ACM INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN, ISLPED | 2023
Keywords
optical computing; phase change memory; processing-in-memory; deep neural networks; neural networks
DOI
10.1109/ISLPED58423.2023.10244409
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Today's Deep Neural Network (DNN) inference systems contain hundreds of billions of parameters, resulting in significant latency and energy overheads during inference due to frequent data transfers between compute and memory units. Processing-in-Memory (PiM) has emerged as a viable solution to tackle this problem by avoiding the expensive data movement. PiM approaches based on electrical devices suffer from throughput and energy efficiency issues. In contrast, Optically-addressed Phase Change Memory (OPCM) operates with light and achieves much higher throughput and energy efficiency compared to its electrical counterparts. This paper introduces a system-level design that takes the OPCM programming overhead into consideration, and identifies that the programming cost dominates the DNN inference on OPCM-based PiM architectures. We explore the design space of this system and identify the most energy-efficient OPCM array size and batch size. We propose a novel thresholding and reordering technique on the weight blocks to further reduce the programming overhead. Combining these optimizations, our approach achieves up to 65.2x higher throughput than existing photonic accelerators for practical DNN workloads.
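The abstract mentions a thresholding-and-reordering technique on weight blocks to cut OPCM programming overhead. The paper's exact algorithm is not given here, so the following is only a minimal hypothetical sketch of the general idea: skip programming weight blocks whose magnitudes are negligible, and order the surviving blocks so that consecutively programmed blocks are similar, reducing how many cells must be rewritten between programming steps. The names `threshold_and_reorder` and `tau` are illustrative assumptions, not from the paper.

```python
import numpy as np

def threshold_and_reorder(weight_blocks, tau):
    """Hypothetical illustration (not the paper's exact algorithm):
    drop near-zero weight blocks, then sort the survivors by total
    magnitude so that similar blocks are programmed back to back."""
    # Thresholding: keep only blocks whose mean |weight| exceeds tau;
    # near-zero blocks are never programmed into the OPCM array.
    kept = [b for b in weight_blocks if np.abs(b).mean() > tau]
    # Reordering: sort kept blocks by L1 norm so consecutive
    # programming steps change the array contents as little as possible.
    kept.sort(key=lambda b: np.abs(b).sum())
    return kept

# Four 2x2 blocks: two are near zero and should be skipped.
blocks = [np.full((2, 2), v) for v in (0.001, 0.5, 0.01, 0.3)]
kept = threshold_and_reorder(blocks, tau=0.05)
# Two blocks survive, ordered by increasing magnitude (0.3 then 0.5).
```

In a real system the ordering criterion would presumably reflect the device-level cost of switching phase-change cells between states, not a simple norm; the sketch only conveys the skip-then-reorder structure.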
Pages: 6