Neural Synaptic Plasticity-Inspired Computing: A High Computing Efficient Deep Convolutional Neural Network Accelerator

Cited by: 18
Authors
Xia, Zihan [1 ]
Chen, Jienan [1 ]
Huang, Qiu [1 ]
Luo, Jinting [1 ]
Hu, Jianhao [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Natl Key Lab Sci & Technol Commun, Chengdu 611731, Peoples R China
Keywords
Deep convolutional neural networks; deep learning; neural synaptic plasticity; stochastic computing; high efficient accelerators; TERM PLASTICITY; ARCHITECTURE; DEVICE;
DOI
10.1109/TCSI.2020.3039346
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology & Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Deep convolutional neural networks (DCNNs) have achieved state-of-the-art performance in classification, natural language processing (NLP), and regression tasks. However, a large gap remains between DCNNs and the human brain in terms of computational efficiency. Inspired by neural synaptic plasticity and stochastic computing (SC), we propose neural synaptic plasticity-inspired computing (NSPC), which emulates the brain's neural network activity for inference tasks using simple logic gates. In NSPC, multiply-accumulate (MAC) operations are realized through wire connectivity, requiring only bundles of wires and small-width adders; in this way, NSPC imitates the structure of neural synaptic plasticity from a circuit-wiring perspective. Furthermore, building on the NSPC principle, a data mapping method converts convolution operations into matrix multiplications. Based on this methodology, a fully pipelined, low-latency architecture is designed. The proposed NSPC accelerator exhibits high hardware efficiency while maintaining comparable network accuracy. The NSPC-based DCNN accelerator (NSPC-CNN) processes 1.5625M images/s with a power dissipation of 15.42 W and an area of 36.4 mm². The NSPC-based deep neural network (DNN) accelerator (NSPC-DNN), which implements a DNN with three fully connected layers, occupies only 6.6 mm², consumes 2.93 W, and achieves a throughput of 400M images/s. Compared with conventional fixed-point implementations, NSPC-CNN achieves 2.77× area efficiency and 2.25× power efficiency, while NSPC-DNN achieves 2.31× area efficiency and 2.09× power efficiency.
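As an illustration only (not the authors' circuit or code), the sketch below uses Python/NumPy to show the two generic ideas the abstract builds on: multiplying two unipolar stochastic bitstreams with a bitwise AND, as in standard stochastic computing, and an im2col-style data mapping that lowers a 2-D convolution to a single matrix multiplication. The function names (sc_multiply, im2col, conv_as_matmul), stream length, stride, and padding choices are hypothetical and chosen only for clarity.

    # Hypothetical sketch of SC-style multiplication and im2col data mapping.
    import numpy as np

    rng = np.random.default_rng(0)

    def sc_multiply(a: float, b: float, length: int = 4096) -> float:
        """Approximate a*b (a, b in [0, 1]) with unipolar stochastic bitstreams."""
        stream_a = rng.random(length) < a      # P(bit = 1) = a
        stream_b = rng.random(length) < b      # P(bit = 1) = b
        product_stream = stream_a & stream_b   # AND gate: P(bit = 1) = a*b
        return product_stream.mean()           # decode by counting ones

    def im2col(x: np.ndarray, k: int) -> np.ndarray:
        """Unfold k x k patches of a (C, H, W) input into columns (stride 1, no padding)."""
        c, h, w = x.shape
        out_h, out_w = h - k + 1, w - k + 1
        cols = np.empty((c * k * k, out_h * out_w))
        for i in range(out_h):
            for j in range(out_w):
                cols[:, i * out_w + j] = x[:, i:i + k, j:j + k].ravel()
        return cols

    def conv_as_matmul(x: np.ndarray, weight: np.ndarray) -> np.ndarray:
        """Express convolution as one matrix multiplication (weight: (F, C, k, k))."""
        f, c, k, _ = weight.shape
        cols = im2col(x, k)                    # (C*k*k, out_h*out_w)
        w_mat = weight.reshape(f, -1)          # (F, C*k*k)
        out = w_mat @ cols                     # (F, out_h*out_w)
        return out.reshape(f, x.shape[1] - k + 1, x.shape[2] - k + 1)

    if __name__ == "__main__":
        print(sc_multiply(0.5, 0.25))              # approximately 0.125
        x = rng.random((3, 8, 8))
        w = rng.random((4, 3, 3, 3))
        print(conv_as_matmul(x, w).shape)          # (4, 6, 6)

Longer bitstreams reduce the variance of the stochastic product estimate, which is the usual accuracy-versus-latency trade-off in stochastic computing; the im2col mapping trades extra data duplication for the ability to run the whole layer as one dense matrix multiplication.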
Pages: 728-740
Page count: 13
Related Papers
50 records in total
  • [31] Fast and Efficient Convolutional Accelerator for Edge Computing
    Ardakani, Arash
    Condo, Carlo
    Gross, Warren J.
    IEEE TRANSACTIONS ON COMPUTERS, 2020, 69 (01) : 138 - 152
  • [32] Stable and efficient resource management using deep neural network on cloud computing
    Jeong, Byeonghui
    Baek, Seungyeon
    Park, Sihyun
    Jeon, Jueun
    Jeong, Young-Sik
    NEUROCOMPUTING, 2023, 521 : 99 - 112
  • [33] A Dynamic Deep Neural Network Design for Efficient Workload Allocation in Edge Computing
    Lo, Chi
    Su, Yu-Yi
    Lee, Chun-Yi
    Chang, Shih-Chieh
    2017 IEEE 35TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD), 2017, : 273 - 280
  • [34] Accurate and Efficient Stochastic Computing Hardware for Convolutional Neural Networks
    Yu, Joonsang
    Kim, Kyounghoon
    Lee, Jongeun
    Choi, Kiyoung
    2017 IEEE 35TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD), 2017, : 105 - 112
  • [35] Exploiting Approximate Computing for Efficient and Reliable Convolutional Neural Networks
    Bosio, Alberto
    Deveautour, Bastien
    O'Connor, Ian
    2022 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2022), 2022, : 326 - 326
  • [36] FPAP: A Folded Architecture for Efficient Computing of Convolutional Neural Networks
    Wang, Yizhi
    Lin, Jun
    Wang, Zhongfeng
    2018 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI), 2018, : 503 - 508
  • [37] Design of FPGA-Based Accelerator for Convolutional Neural Network under Heterogeneous Computing Framework with OpenCL
    Luo, Li
    Wu, Yakun
    Qiao, Fei
    Yang, Yi
    Wei, Qi
    Zhou, Xiaobo
    Fan, Yongkai
    Xu, Shuzheng
    Liu, Xinjun
    Yang, Huazhong
    INTERNATIONAL JOURNAL OF RECONFIGURABLE COMPUTING, 2018, 2018
  • [38] A Spiking Deep Convolutional Neural Network Based on Efficient Spike Timing Dependent Plasticity
    Zhou, Xueqian
    Song, Zeyang
    Wu, Xi
    Yan, Rui
    2020 3RD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND BIG DATA (ICAIBD 2020), 2020, : 39 - 45
  • [39] VWA: Hardware Efficient Vectorwise Accelerator for Convolutional Neural Network
    Chang, Kuo-Wei
    Chang, Tian-Sheuan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2020, 67 (01) : 145 - 154
  • [40] FPGA Implementation of Convolutional Neural Network Based on Stochastic Computing
    Kim, Daewoo
    Moghaddam, Mansureh S.
    Moradian, Hossein
    Sim, Hyeonuk
    Lee, Jongeun
    Choi, Kiyoung
    2017 INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE TECHNOLOGY (ICFPT), 2017, : 287 - 290