NAND-SPIN-based processing-in-MRAM architecture for convolutional neural network acceleration

Cited by: 0
Authors
Yinglin Zhao
Jianlei Yang
Bing Li
Xingzhou Cheng
Xucheng Ye
Xueyan Wang
Xiaotao Jia
Zhaohao Wang
Youguang Zhang
Weisheng Zhao
Affiliations
[1] Beihang University,School of Electronic and Information Engineering
[2] Beihang University,School of Computer Science and Engineering
[3] Capital Normal University,Academy for Multidisciplinary Studies
[4] Beihang University,School of Integrated Circuit Science and Engineering
[5] Beihang University,Qingdao Research Institute
Source
Science China Information Sciences | 2023, Vol. 66
Keywords
processing-in-memory; convolutional neural network; NAND-like spintronics memory; nonvolatile memory; magnetic tunnel junction;
DOI
Not available
Abstract
The performance and energy efficiency of traditional computing systems running large-scale datasets are critically limited by the “power wall” and “memory wall” problems. To resolve these problems, processing-in-memory (PIM) architectures bring computation logic into or near memory to alleviate the bandwidth limitations of data transmission. NAND-like spintronics memory (NAND-SPIN) is a promising kind of magnetoresistive random-access memory (MRAM) with low write energy and high integration density, and it can be employed to perform efficient in-memory computation. In this study, we propose a NAND-SPIN-based PIM architecture for efficient convolutional neural network (CNN) acceleration. A straightforward data mapping scheme is exploited to improve parallelism while reducing data movements. Benefiting from the excellent characteristics of NAND-SPIN and the in-memory processing architecture, the proposed approach achieves a ∼2.6× speedup and a ∼1.4× improvement in energy efficiency over state-of-the-art PIM solutions in our experiments.
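
The abstract does not detail the data mapping scheme, so the sketch below is only a minimal illustration, under assumed shapes and names, of the general flavor of PIM-style CNN mapping: a convolution is lowered to matrix-vector products (im2col-like unrolling) so that each kernel's flattened weights can be thought of as occupying one memory column (subarray) and all output channels are evaluated in parallel for each input patch. The functions im2col and pim_conv2d, the NumPy stand-in for in-memory multiply-accumulate, and all sizes are hypothetical, not the paper's actual design.

import numpy as np

def im2col(x, k):
    # Unroll every k x k patch of a single-channel feature map into one row
    # (hypothetical helper; padding and stride are omitted for brevity).
    h, w = x.shape
    oh, ow = h - k + 1, w - k + 1
    cols = np.empty((oh * ow, k * k), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i + k, j:j + k].ravel()
    return cols, (oh, ow)

def pim_conv2d(x, kernels):
    # Map each flattened kernel to one "column" of the weight matrix, as if it
    # were stored in one memory subarray, then accumulate patch by patch.
    # The matrix product emulates the parallel multiply-accumulate that an
    # in-memory architecture would perform with its own logic.
    k = kernels.shape[-1]
    cols, (oh, ow) = im2col(x, k)
    weight_matrix = kernels.reshape(kernels.shape[0], -1).T   # (k*k, out_channels)
    out = cols @ weight_matrix                                # one MAC sweep per patch
    return out.T.reshape(kernels.shape[0], oh, ow)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feature_map = rng.standard_normal((8, 8)).astype(np.float32)
    kernels = rng.standard_normal((4, 3, 3)).astype(np.float32)  # 4 output channels
    print(pim_conv2d(feature_map, kernels).shape)  # (4, 6, 6)

In this sketch each column of weight_matrix plays the role of weights co-located with one compute subarray, so all output channels advance in parallel for a given patch; a real NAND-SPIN design would realize these multiply-accumulates with in-memory operations rather than a host-side matrix product.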