Large scale integrated IGZO crossbar memristor array based artificial neural architecture for scalable in-memory computing

Cited by: 6
Authors
Naqi, Muhammad [1 ]
Kim, Taehwan [3 ]
Cho, Yongin [1 ]
Pujar, Pavan [2 ]
Park, Jongsun [3 ]
Kim, Sunkook [1 ]
Affiliations
[1] Sungkyunkwan Univ, Sch Adv Mat Sci & Engn, Suwon 16419, South Korea
[2] Indian Inst Technol IIT BHU, Dept Ceram Engn, Varanasi 221005, Uttar Pradesh, India
[3] Korea Univ, Sch Elect Engn, Seoul 136713, South Korea
Source
MATERIALS TODAY NANO | 2024, Vol. 25
Funding
National Research Foundation of Singapore
Keywords
IGZO; memristor array; artificial synapse; neural networks; neuromorphic computing; artificial intelligence; spiking neural network; RRAM devices; network
DOI
10.1016/j.mtnano.2023.100441
CLC Classification
TB3 [Engineering Materials Science]
Subject Classification
0805; 080502
Abstract
Neuromorphic systems based on memristor arrays have not only addressed the von Neumann bottleneck but have also enabled computing applications with high accuracy. In this study, an artificial neural architecture based on a 10 x 10 IGZO memristor array is presented to emulate synaptic dynamics for artificial intelligence (AI) computing with high recognition accuracy. The large-area 10 x 10 IGZO memristor array was fabricated by photolithography, resulting in stable and reliable memory operation. Bipolar switching from -2 V to 2.5 V, endurance of 500 cycles, retention of >10^4 s, and uniform V_set/V_reset operation across 100 devices were achieved by modulating the oxygen vacancies in the IGZO film. Emulation of electrical synaptic dynamics was also observed, including potentiation-depression, multilevel long-term memory (LTM), and multilevel short-term memory (STM), revealing highly linear and stable synaptic functions under different modulated pulse settings. Additionally, electrical modeling (HSPICE) with vector-matrix measurements and simulation of various artificial neural network (ANN) algorithms, such as a convolutional neural network (CNN) and a spiking neural network (SNN), were performed, demonstrating a linear increase in current accumulation with high recognition rates of 99.33 % and 86.46 %, respectively. This work provides a novel approach for overcoming the von Neumann bottleneck and emulating synaptic dynamics in various neural networks with high accuracy.
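The "linear increase in current accumulation" the abstract reports is the defining property of crossbar in-memory computing: each cross-point stores a conductance, a voltage applied to a row produces a current through each device by Ohm's law, and Kirchhoff's current law sums those currents along each column, so the array computes a vector-matrix product in one analog step. A minimal NumPy sketch of this idealized behavior for a 10 x 10 array (the conductance and voltage values below are illustrative placeholders, not the paper's measured data):

```python
import numpy as np

# Idealized 10 x 10 memristor crossbar (hypothetical values, ideal devices,
# no wire resistance or sneak paths): G[i, j] is the conductance programmed
# at the cross-point of row i and column j.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(10, 10))  # conductances in siemens
V = rng.uniform(0.0, 0.5, size=10)          # input voltages on the 10 rows

# Ohm's law per device plus Kirchhoff current summation per column gives
# the column currents I_j = sum_i V_i * G[i, j] - i.e. one analog
# vector-matrix multiply, read out in parallel across all columns.
I = V @ G

print(I.shape)  # one accumulated current per column
```

Because the readout is linear in both V and G, doubling the input voltages doubles every column current, which is the linearity the vector-matrix measurements in the paper verify before mapping ANN weight matrices onto the array.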
Pages: 14
Related Papers (6)
  • [1] A scalable and reconfigurable in-memory architecture for ternary deep spiking neural network with ReRAM based neurons
    Lin, Jie
    Yuan, Jiann-Shiun
    NEUROCOMPUTING, 2020, 375: 102-112
  • [2] Memristor-based Deep Spiking Neural Network with a Computing-In-Memory Architecture
    Nowshin, Fabiha
    Yi, Yang
    PROCEEDINGS OF THE TWENTY THIRD INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN (ISQED 2022), 2022: 163-168
  • [3] Resistive Memory-Based In-Memory Computing: From Device and Large-Scale Integration System Perspectives
    Yan, Bonan
    Li, Bing
    Qiao, Ximing
    Xue, Cheng-Xin
    Chang, Meng-Fan
    Chen, Yiran
    Li, Hai
    ADVANCED INTELLIGENT SYSTEMS, 2019, 1 (07)
  • [4] Integrated In-Memory Sensor and Computing of Artificial Vision Based on Full-vdW Optoelectronic Ferroelectric Field-Effect Transistor
    Wang, Peng
    Li, Jie
    Xue, Wuhong
    Ci, Wenjuan
    Jiang, Fengxian
    Shi, Lei
    Zhou, Feichi
    Zhou, Peng
    Xu, Xiaohong
    ADVANCED SCIENCE, 2024, 11 (03)
  • [5] Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
    Rasch, Malte J.
    Mackin, Charles
    Le Gallo, Manuel
    Chen, An
    Fasoli, Andrea
    Odermatt, Frederic
    Li, Ning
    Nandakumar, S. R.
    Narayanan, Pritish
    Tsai, Hsinyu
    Burr, Geoffrey W.
    Sebastian, Abu
    Narayanan, Vijay
    NATURE COMMUNICATIONS, 2023, 14 (01)
  • [6] Integrated in-memory sensor and computing of artificial vision system based on reversible bonding transition-induced nitrogen-doped carbon quantum dots (N-CQDs)
    Yu, Tianqi
    Li, Jie
    Lei, Wei
    Shafe, Suhaidi
    Mohtar, Mohd Nazim
    Jindapetch, Nattha
    van Dommelen, Paphavee
    Zhao, Zhiwei
    NANO RESEARCH, 2024, 17 (11): 10049-10057