Symmetric and Energy-Efficient Conductance Update in Ferroelectric Tunnel Junction for Neural Network Computing

Cited by: 2
Authors
Guan, Zeyu [1 ,2 ]
Wang, Zijian [1 ,2 ]
Shen, Shengchun [1 ,2 ]
Yin, Yuewei [1 ,2 ]
Li, Xiaoguang [1 ,2 ,3 ]
Affiliations
[1] Univ Sci & Technol China, Hefei Natl Res Ctr Phys Sci Microscale, CAS Key Lab Strongly Coupled Quantum Matter Phys, Dept Phys, Hefei 230026, Peoples R China
[2] Univ Sci & Technol China, CAS Key Lab Strongly Coupled Quantum Matter Phys, Hefei 230026, Peoples R China
[3] Nanjing Univ, Collaborat Innovat Ctr Adv Microstruct, Nanjing 210093, Peoples R China
Source
ADVANCED MATERIALS TECHNOLOGIES | 2024, Vol. 9, No. 19
Funding
National Natural Science Foundation of China
Keywords
artificial synapses; ferroelectric tunnel junctions; neural network computing; MEMRISTOR; DEVICES;
DOI
10.1002/admt.202302238
CLC number: T [Industrial Technology]
Discipline code: 08
Abstract
The rapid development of artificial intelligence requires synaptic devices with controllable conductance updates and low power consumption. Currently, conductance updates based on the identical voltage pulse scheme (IVPS) and the nonidentical voltage pulse scheme (NIVPS) suffer from limited recognition accuracy and poor energy efficiency, respectively. In this study, a mixed voltage pulse scheme (MVPS) for tuning conductance is proposed to achieve high recognition accuracy and high energy efficiency simultaneously, and its superiority is experimentally verified with high-performance Au (or Ag)/PbZr0.52Ti0.48O3/Nb:SrTiO3 ferroelectric tunnel junction (FTJ) synaptic devices. An MVPS-based neural network simulation achieves a high recognition accuracy of ≈92% on the CIFAR-10 dataset with better energy efficiency and throughput than NIVPS. In addition, high-precision experimental vector-matrix multiplication (with a relative error of ≈1.5%) is obtained, and the simulated FTJ synaptic arrays achieve a high inference energy efficiency of ≈85 TOPS W-1 and a throughput of ≈200 TOPS, meeting the requirements of artificial intelligence in low-power scenarios. This study provides a possible route toward practical applications of FTJs in neural network computing.

A mixed voltage pulse scheme is proposed to achieve symmetric and energy-efficient conductance updates. Using this scheme with a high-performance Au/PbZr0.52Ti0.48O3/Nb:SrTiO3 FTJ, simulated artificial intelligence tasks reveal an impressive training accuracy-energy ratio (≈1.23% J-1), along with a high inference energy efficiency (≈85 TOPS W-1), laying the groundwork for incorporating FTJs into neural network computing applications.
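The vector-matrix multiplication mentioned in the abstract is the core analog operation of a synaptic crossbar: input voltages applied to the rows interact with the stored conductances, and by Ohm's and Kirchhoff's laws each column current is the dot product of the voltage vector with that column of the conductance matrix. A minimal numerical sketch of this idea, with hypothetical conductance and noise values not taken from the paper:

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a crossbar of
# synaptic conductances G computes a vector-matrix product in one step.
# Applying voltages V to the rows yields column currents
#   I_j = sum_i V_i * G_ij   (Ohm's law + Kirchhoff's current law).

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-5, size=(4, 3))   # conductances in siemens (hypothetical range)
V = rng.uniform(0.0, 0.1, size=4)          # read voltages in volts (hypothetical)

I_ideal = V @ G                            # ideal analog VMM output currents

# Device variation perturbs the programmed conductances; a relative-error
# metric like the abstract's ~1.5% compares realized vs ideal outputs.
G_noisy = G * (1 + 0.01 * rng.standard_normal(G.shape))
I_real = V @ G_noisy
rel_err = np.linalg.norm(I_real - I_ideal) / np.linalg.norm(I_ideal)
print(f"relative VMM error: {rel_err:.2%}")
```

Because the whole product is read out as a set of column currents in a single step, the energy cost scales with the array read, not with the number of multiply-accumulate operations, which is the basis for the TOPS W-1 figures quoted above.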
Pages: 7