Flash-Based Computing-in-Memory Architecture to Implement High-Precision Sparse Coding

Times Cited: 0
Authors
Qi, Yueran [1 ]
Feng, Yang [1 ]
Wang, Hai [1 ]
Wang, Chengcheng [1 ]
Bai, Maoying [1 ]
Liu, Jing [2 ]
Zhan, Xuepeng [1 ]
Wu, Jixuan [1 ]
Wang, Qianwen [1 ]
Chen, Jiezhi [1 ]
Affiliations
[1] Shandong Univ, Sch Informat Sci & Engn, Qingdao 266237, Peoples R China
[2] Chinese Acad Sci, Key Lab Microelect Devices & Integrated Technol, Inst Microelect, Beijing 100029, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
computing in memory; sparse coding; image reconstruction; online training; flash memory;
DOI
10.3390/mi14122190
CLC Classification Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704
Abstract
To address concerns over power consumption and processing efficiency in large-scale data processing, sparse coding in computing-in-memory (CIM) architectures is attracting increasing attention. Here, a novel Flash-based CIM architecture is proposed to implement large-scale sparse coding, on which various matrix-weight training algorithms are verified. Then, with further optimization of the mapping methods and initialization conditions, a variation-sensitive training (VST) algorithm is designed to enhance the processing efficiency and accuracy of image-reconstruction applications. Based on comprehensive characterizations that account for the impact of array variations, experiments demonstrate that the trained dictionary can successfully reconstruct images in a 55 nm flash memory array built on the proposed architecture, irrespective of current variations. The results indicate the feasibility of using Flash-based CIM architectures to implement high-precision sparse coding in a wide range of applications.
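The abstract centers on dictionary-based sparse coding: an image (or patch) is represented as a sparse combination of dictionary atoms stored as array weights, and reconstruction multiplies the dictionary by the sparse coefficient vector. As a rough illustration only, and not the paper's VST algorithm or its measured 55 nm array behavior, the NumPy sketch below codes a signal against a random dictionary with greedy orthogonal matching pursuit and then repeats the reconstruction after an assumed 5% multiplicative weight perturbation standing in for device current variation; all names, dimensions, and the noise model are hypothetical.

```python
# Minimal sketch (not from the paper) of dictionary-based sparse coding and
# reconstruction, with a simple Gaussian perturbation of the stored dictionary
# to emulate the current variations a Flash CIM array might introduce.
import numpy as np

rng = np.random.default_rng(0)

def sparse_code_omp(D, x, k):
    """Greedy orthogonal matching pursuit: pick k atoms of D to approximate x."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected atoms.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - D @ coeffs
    return coeffs

# Toy setup: 64-dimensional signals (e.g., 8x8 image patches), 128 atoms.
n_dim, n_atoms, sparsity = 64, 128, 5
D = rng.standard_normal((n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms

# Synthesize a signal that is exactly sparse in D, then reconstruct it.
true_coeffs = np.zeros(n_atoms)
true_coeffs[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
x = D @ true_coeffs

# Ideal (variation-free) reconstruction.
a = sparse_code_omp(D, x, sparsity)
print("ideal reconstruction error:", np.linalg.norm(x - D @ a))

# Emulate device-to-device current variation as multiplicative noise on the
# stored weights (assumed model, not the paper's measured statistics).
D_noisy = D * (1.0 + 0.05 * rng.standard_normal(D.shape))
a_noisy = sparse_code_omp(D_noisy, x, sparsity)
print("reconstruction error with 5% weight variation:",
      np.linalg.norm(x - D_noisy @ a_noisy))
```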
Pages: 13
相关论文
共 30 条
  • [1] EvAn: Neuromorphic Event-Based Sparse Anomaly Detection
    Annamalai, Lakshmi
    Chakraborty, Anirban
    Thakur, Chetan Singh
    [J]. FRONTIERS IN NEUROSCIENCE, 2021, 15
  • [2] [Anonymous], 2015, Sparse coding and its applications in computer vision
  • [3] ADAPTIVE APPROACH FOR SPARSE REPRESENTATIONS USING THE LOCALLY COMPETITIVE ALGORITHM FOR AUDIO
    Bahadi, Soufiyan
    Rouat, Jean
    Plourde, Eric
    [J]. 2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021,
  • [4] A fully integrated reprogrammable memristor-CMOS system for efficient multiply-accumulate operations
    Cai, Fuxi
    Correll, Justin M.
    Lee, Seung Hwan
    Lim, Yong
    Bothra, Vishishtha
    Zhang, Zhengya
    Flynn, Michael P.
    Lu, Wei D.
    [J]. NATURE ELECTRONICS, 2019, 2 (07) : 290 - 299
  • [5] High-to-Low Flippling (HLF) Coding Strategy in Triple-levell-cell (TLC) 3D NAND Flash Memory to Construct Reliable Image Storages
    Chen, Binglu
    Kong, Yachen
    Sun, Zhaohui
    Fang, Xiaotong
    Zhan, Xuepeng
    Chen, Jiezhi
    [J]. 6TH IEEE ELECTRON DEVICES TECHNOLOGY AND MANUFACTURING CONFERENCE (EDTM 2022), 2022, : 336 - 338
  • [6] Dong ZK, 2018, CHIN CONTR CONF, P8132, DOI 10.23919/ChiCC.2018.8484073
  • [7] Sparse Coding Using the Locally Competitive Algorithm on the TrueNorth Neurosynaptic System
    Fair, Kaitlin L.
    Mendat, Daniel R.
    Andreou, Andreas G.
    Rozell, Christopher J.
    Romberg, Justin
    Anderson, David, V
    [J]. FRONTIERS IN NEUROSCIENCE, 2019, 13
  • [8] Feng Y., 2023, J. Semicond, V45, P1
  • [9] A Novel Array Programming Scheme for Large Matrix Processing in Flash-Based Computing-in-Memory (CIM) With Ultrahigh Bit Density
    Feng, Yang
    Zhang, Dong
    Zhao, Guoqing
    Sun, Zhaohui
    Bai, Maoying
    Qi, Yueran
    Gong, Xiao
    Liu, Jing
    Zhang, Junyu
    Wu, Jixuan
    Zhan, Xuepeng
    Chen, Jiezhi
    [J]. IEEE TRANSACTIONS ON ELECTRON DEVICES, 2023, 70 (02) : 461 - 467
  • [10] Guo X., 2017, 2017 IEEE International Electron Devices Meeting (IEDM), DOI [DOI 10.1109/IEDM.2017.8268341, 10.1109/CISP-BMEI.2017.8301926]