Neural Synaptic Plasticity-Inspired Computing: A High Computing Efficient Deep Convolutional Neural Network Accelerator

Cited by: 18
Authors
Xia, Zihan [1 ]
Chen, Jienan [1 ]
Huang, Qiu [1 ]
Luo, Jinting [1 ]
Hu, Jianhao [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Natl Key Lab Sci & Technol Commun, Chengdu 611731, Peoples R China
Keywords
Deep convolutional neural networks; deep learning; neural synaptic plasticity; stochastic computing; high efficient accelerators; TERM PLASTICITY; ARCHITECTURE; DEVICE;
DOI
10.1109/TCSI.2020.3039346
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Deep convolutional neural networks (DCNNs) have achieved state-of-the-art performance in classification, natural language processing (NLP), and regression tasks. However, there is still a great gap between DCNNs and the human brain in terms of computation efficiency. Inspired by neural synaptic plasticity and stochastic computing (SC), we propose neural synaptic plasticity-inspired computing (NSPC), which simulates the human brain's neural network activity for inference tasks with simple logic gates. In NSPC, multiply-and-accumulate (MAC) operations are realized through wire connectivity, requiring only bundles of wires and small-width adders; in this way, NSPC imitates the structure of neural synaptic plasticity from a circuit-wiring perspective. Furthermore, following the principle of NSPC, we use a data mapping method to convert convolution operations into matrix multiplications. Based on the NSPC methodology, a fully pipelined, low-latency architecture is designed. The proposed NSPC accelerator exhibits high hardware efficiency while maintaining a comparable level of network accuracy. The NSPC-based DCNN accelerator (NSPC-CNN) processes 1.5625M images/s with a power dissipation of 15.42 W and an area of 36.4 mm². The NSPC-based deep neural network (DNN) accelerator (NSPC-DNN), which implements a three-layer fully connected DNN, consumes only 6.6 mm² of area and 2.93 W of power while achieving a throughput of 400M images/s. Compared with conventional fixed-point implementations, NSPC-CNN achieves 2.77x area efficiency and 2.25x power efficiency, and NSPC-DNN exhibits 2.31x area efficiency and 2.09x power efficiency.
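The data mapping step described in the abstract, converting convolutions into matrix multiplications, is commonly realized with an im2col-style patch unrolling. A minimal sketch of that idea (not the authors' implementation; `im2col` is a hypothetical helper, assuming stride 1, no padding, and a single channel):

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll each kh x kw patch of a 2-D input into one row (stride 1, no padding)."""
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1      # output spatial dimensions
    cols = np.empty((oh * ow, kh * kw))
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i+kh, j:j+kw].ravel()
    return cols

# Convolution (cross-correlation) becomes a matrix-vector product:
x = np.arange(16.0).reshape(4, 4)        # toy 4x4 feature map
k = np.ones((3, 3))                      # toy 3x3 kernel
y = im2col(x, 3, 3) @ k.ravel()          # one output value per 2x2 output position
```

Once convolutions are flattened this way, the whole network reduces to matrix multiplications, which is what lets an accelerator implement the MAC step with a single hardwired structure.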
Pages: 728 - 740
Number of pages: 13
Related Papers
50 records
  • [1] Reconfigurable Neural Synaptic Plasticity-Based Stochastic Deep Neural Network Computing
    Xia, Zihan
    Dong, Ya
    Chen, Jienan
    Wan, Rui
    Li, Shuai
    Wu, Tingyong
    2021 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS 2021), 2021, : 229 - 234
  • [2] A hardware-efficient computing engine for FPGA-based deep convolutional neural network accelerator
    Li, Xueming
    Huang, Hongmin
    Chen, Taosheng
    Gao, Huaien
    Hu, Xianghong
    Xiong, Xiaoming
    MICROELECTRONICS JOURNAL, 2022, 128
  • [3] Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access
    Kim, Minkyu
    Seo, Jae-sun
    2020 IEEE CUSTOM INTEGRATED CIRCUITS CONFERENCE (CICC), 2020,
  • [4] A Computing Efficient Hardware Architecture for Sparse Deep Neural Network Computing
    Zhang, Yanwen
    Ouyang, Peng
    Yin, Shouyi
    Zhang, Youguang
    Zhao, Weisheng
    Wei, Shaojun
    2018 14TH IEEE INTERNATIONAL CONFERENCE ON SOLID-STATE AND INTEGRATED CIRCUIT TECHNOLOGY (ICSICT), 2018, : 1261 - 1263
  • [5] An Energy-Efficient Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access
    Kim, Minkyu
    Seo, Jae-Sun
    IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2021, 56 (03) : 803 - 813
  • [6] An efficient stochastic computing based deep neural network accelerator with optimized activation functions
    Bodiwala, S.
    Nanavati, N.
    International Journal of Information Technology, 2021, 13 (3) : 1179 - 1192
  • [7] High Speed and Energy Efficient Deep Neural Network for Edge Computing
    Bai, Kangjun
    Liu, Shiya
    Yi, Yang
    SEC'19: PROCEEDINGS OF THE 4TH ACM/IEEE SYMPOSIUM ON EDGE COMPUTING, 2019, : 347 - 349
  • [8] Accelerating Deep Convolutional Neural Network base on stochastic computing
    Sadi, Mohamad Hasani
    Mahani, Ali
    INTEGRATION-THE VLSI JOURNAL, 2021, 76 : 113 - 121
  • [9] An Energy-Efficient and Flexible Accelerator based on Reconfigurable Computing for Multiple Deep Convolutional Neural Networks
    Yang, Chen
    Zhang, HaiBo
    Wang, XiaoLi
    Geng, Li
    2018 14TH IEEE INTERNATIONAL CONFERENCE ON SOLID-STATE AND INTEGRATED CIRCUIT TECHNOLOGY (ICSICT), 2018, : 1389 - 1391
  • [10] Developmental Plasticity-Inspired Adaptive Pruning for Deep Spiking and Artificial Neural Networks
    Han, Bing
    Zhao, Feifei
    Zeng, Yi
    Shen, Guobin
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (01) : 240 - 251