An in-memory computing architecture based on two-dimensional semiconductors for multiply-accumulate operations

Authors
Yin Wang
Hongwei Tang
Yufeng Xie
Xinyu Chen
Shunli Ma
Zhengzong Sun
Qingqing Sun
Lin Chen
Hao Zhu
Jing Wan
Zihan Xu
David Wei Zhang
Peng Zhou
Wenzhong Bao
Affiliations
[1] Fudan University,State Key Laboratory of ASIC and System, School of Microelectronics
[2] Shenzhen Sixcarbon Technology
Source
Nature Communications, Volume 12
Abstract
In-memory computing may enable multiply-accumulate (MAC) operations, which are the primary calculations used in artificial intelligence (AI). Performing MAC operations with high capacity in a small area and with high energy efficiency remains a challenge. In this work, we propose a circuit architecture that integrates monolayer MoS2 transistors in a two-transistor–one-capacitor (2T-1C) configuration. In this structure, the memory portion is similar to a 1T-1C Dynamic Random Access Memory (DRAM) cell, so the cycling endurance and erase/write speed theoretically inherit the merits of DRAM. In addition, the ultralow leakage current of the MoS2 transistor enables the storage of multi-level voltages on the capacitor with a long retention time. The electrical characteristics of a single MoS2 transistor also allow analog computation by multiplying the drain voltage by the voltage stored on the capacitor. The sum-of-products is then obtained by converging the currents from multiple 2T-1C units. Based on our experimental results, a neural network is ex-situ trained for image recognition with 90.3% accuracy. In the future, such 2T-1C units can potentially be integrated into three-dimensional (3D) circuits with dense logic and memory layers for low-power in-situ training of neural networks in hardware.
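The analog MAC scheme described above can be sketched in a few lines: each 2T-1C cell contributes a current proportional to the product of its input (drain) voltage and the analog weight voltage stored on its capacitor, and Kirchhoff summation on a shared line yields the dot product. The following is a minimal, idealized sketch, not the authors' model; the linear proportionality constant `g` and the perfect multiplicative cell behavior are simplifying assumptions.

```python
import numpy as np

def mac_array(v_drain, v_stored, g=1e-6):
    """Idealized 2T-1C MAC array model (illustrative assumption, not measured behavior).

    v_drain : 1-D array of input (drain) voltages, shape (n,)
    v_stored: 2-D array of capacitor weight voltages, shape (m, n)
    g       : assumed proportionality constant relating V*V to current
    """
    # Each cell (i, j) sources a current proportional to the product
    # of its drain voltage and its stored capacitor voltage.
    currents = g * v_stored * v_drain      # broadcasts v_drain over rows
    # Converging the cell currents on a shared line performs the summation.
    return currents.sum(axis=1)            # shape (m,): one MAC result per row

v_in = np.array([0.2, 0.5, 0.8])           # input drain voltages
w = np.array([[0.1, 0.4, 0.7],
              [0.3, 0.6, 0.9]])            # stored weight voltages
i_out = mac_array(v_in, w)                 # equivalent to g * (w @ v_in)
```

Under these assumptions the output currents equal `g * (w @ v_in)`, which is why summing the unit currents directly implements the sum-of-products needed for neural-network inference.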