Advances and Trends on On-Chip Compute-in-Memory Macros and Accelerators

Cited by: 0
Authors
Seo, Jae-sun [1,2]
Affiliations
[1] Arizona State Univ, Sch ECEE, Tempe, AZ 85281 USA
[2] Meta Reality Labs, Tempe, AZ 85287 USA
Keywords
Compute-in-memory; AI; accelerator; ASIC
DOI
10.1109/DAC56929.2023.10248014
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Conventional AI accelerators have been bottlenecked by the high volume of data movement and memory accesses required between memory and compute units. A transformative approach that has emerged to address this is the compute-in-memory (CIM) architecture, which performs computation in place inside volatile or non-volatile memory, in an analog or digital manner, greatly reducing data transfers and memory accesses. This paper presents recent advances and trends in CIM macros and CIM-based accelerator designs.
Pages: 2
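
As a rough functional illustration of the CIM operation the abstract describes, the Python sketch below models one analog CIM macro: weights are stored in the memory array (e.g., as conductances), the input vector drives the wordlines, each bitline accumulates one column's dot product in the analog domain, and an ADC digitizes the column sums. This is a minimal sketch under assumed parameters, not a design from the paper; the function name cim_macro_mvm, the 256x64 array size, and the 4-bit ADC are illustrative choices.

    # Minimal functional model of an analog CIM macro (illustrative only).
    import numpy as np

    def cim_macro_mvm(x, W, adc_bits=4, adc_range=None):
        """Model one in-array matrix-vector multiply: y ~= W.T @ x.

        x : (rows,)       low-precision input activations on the wordlines
        W : (rows, cols)  low-precision weights stored in the memory array
        """
        # Ideal analog accumulation along each bitline (no device noise).
        bitline_sums = W.T @ x
        # ADC with a fixed full-scale range digitizes each column sum.
        if adc_range is None:
            adc_range = W.shape[0]  # worst-case sum for binary inputs/weights
        levels = 2 ** adc_bits - 1
        codes = np.clip(np.round(bitline_sums / adc_range * levels), 0, levels)
        return codes * adc_range / levels  # dequantized digital output

    # Example: a hypothetical 256-row x 64-column macro with binary values.
    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=256)
    W = rng.integers(0, 2, size=(256, 64))
    y = cim_macro_mvm(x, W)
    print(y.shape, float(np.abs(y - W.T @ x).max()))  # output shape, ADC error

In this model the weights never leave the array, which is the data-movement saving the abstract refers to; the output differs from the exact product only by the ADC quantization error, the main accuracy cost that analog CIM trades for that saving.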