HNMC: Hybrid Near-Memory Computing Circuit for Neural Network Acceleration

Cited: 0
Authors
Liu, Xiyan [1]
Liu, Qiang [1]
Affiliations
[1] Tianjin Univ, Sch Microelect, Tianjin Key Lab Imaging & Sensing Microelect Techn, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Circuits; Table lookup; Delays; Random access memory; Optimization; Xenon; Neural networks; Near-memory computing; SRAM; look-up table; multiplier; neural networks; MULTIPLICATION
DOI
10.1109/TCSII.2024.3403830
Chinese Library Classification
TM (Electrical engineering); TN (Electronics and communication technology)
Subject Classification Codes
0808; 0809
Abstract
State-of-the-art lookup-table (LUT)-based solutions usually require large memories, which hampers the implementation of high-speed, small-area computing schemes. To minimize area while achieving high speed, this brief presents a hybrid near-memory computing (HNMC) circuit with a new LUT reduction technique to accelerate LUT-based multiplication in neural networks. Experimental results show that, compared with 4-bit multipliers based on pure-LUT, pure-logic, and near-memory computing (NMC) circuits, the HNMC-based multiplier reduces delay by up to 75%, area by up to 81%, and power consumption by up to 80%; the 8-bit HNMC-based convolution engine doubles throughput and reduces area by up to 77% and power consumption by up to 40% compared with the state-of-the-art NMC convolution engine.
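To make the baseline concrete, the sketch below illustrates plain LUT-based multiplication — the memory-hungry scheme the brief's HNMC circuit optimizes — not the HNMC circuit or its LUT reduction technique itself. All names here are hypothetical; a full 4-bit product table needs 16 x 16 entries, which is the storage cost the paper targets, and wider multipliers are composed from 4-bit partial products.

```python
# Baseline LUT-based multiplication (illustrative only, not the HNMC design).
# A full table of unsigned 4-bit products: 16 x 16 = 256 entries.
LUT_4BIT = [[a * b for b in range(16)] for a in range(16)]

def lut_mul4(a: int, b: int) -> int:
    """Multiply two unsigned 4-bit values with a single table lookup."""
    assert 0 <= a < 16 and 0 <= b < 16
    return LUT_4BIT[a][b]

def mul8_from_4bit(a: int, b: int) -> int:
    """Compose an unsigned 8-bit product from four 4-bit partial products,
    mirroring how wider multipliers are built from 4-bit LUT units."""
    a_hi, a_lo = a >> 4, a & 0xF
    b_hi, b_lo = b >> 4, b & 0xF
    return ((lut_mul4(a_hi, b_hi) << 8)
            + ((lut_mul4(a_hi, b_lo) + lut_mul4(a_lo, b_hi)) << 4)
            + lut_mul4(a_lo, b_lo))
```

The single lookup replaces a logic multiplier, trading delay for memory; shrinking that table (e.g., by exploiting symmetry a*b = b*a or other reduction tricks) is the kind of optimization the brief's LUT reduction technique pursues in hardware.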
Pages: 4763-4767
Page count: 5
Related Papers
50 records in total
  • [31] TETRIS: Scalable and Efficient Neural Network Acceleration with 3D Memory
    Gao, Mingyu
    Pu, Jing
    Yang, Xuan
    Horowitz, Mark
    Kozyrakis, Christos
    OPERATING SYSTEMS REVIEW, 2017, 51 (02) : 751 - 764
  • [32] TETRIS: Scalable and Efficient Neural Network Acceleration with 3D Memory
    Gao, Mingyu
    Pu, Jing
    Yang, Xuan
    Horowitz, Mark
    Kozyrakis, Christos
    ACM SIGPLAN NOTICES, 2017, 52 (04) : 751 - 764
  • [33] TETRIS: Scalable and Efficient Neural Network Acceleration with 3D Memory
    Gao, Mingyu
    Pu, Jing
    Yang, Xuan
    Horowitz, Mark
    Kozyrakis, Christos
    TWENTY-SECOND INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS (ASPLOS XXII), 2017, : 751 - 764
  • [34] Vesti: An In-Memory Computing Processor for Deep Neural Networks Acceleration
    Jiang, Zhewei
    Yin, Shihui
    Kim, Minkyu
    Gupta, Tushar
    Seok, Mingoo
    Seo, Jae-sun
    CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2019, : 1516 - 1521
  • [35] Neural network based hybrid computing model for wind speed prediction
    Sheela, K. Gnana
    Deepa, S. N.
    NEUROCOMPUTING, 2013, 122 : 425 - 429
  • [36] An intelligent computing technique for fluid flow problems using hybrid adaptive neural network and genetic algorithm
    El-Emam, Nameer N.
    Al-Rabeh, Riadh H.
    APPLIED SOFT COMPUTING, 2011, 11 (04) : 3283 - 3296
  • [37] Monitoring near burner slag deposition with a hybrid neural network system
    Tan, CK
    Wilcox, SJ
    Ward, J
    Lewitt, M
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2003, 14 (07) : 1137 - 1145
  • [38] A Ternary Neural Network Computing-in-Memory Processor With 16T1C Bitcell Architecture
    Jeong, Hoichang
    Kim, Seungbin
    Park, Keonhee
    Jung, Jueun
    Lee, Kyuho Jason
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2023, 70 (05) : 1739 - 1743
  • [39] An Energy-Efficient and High Throughput in-Memory Computing Bit-Cell With Excellent Robustness Under Process Variations for Binary Neural Network
    Saha, Gobinda
    Jiang, Zhewei
    Parihar, Sanjay
    Cao, Xi
    Higman, Jack
    Ul Karim, Muhammed Ahosan
    IEEE ACCESS, 2020, 8 : 91405 - 91414
  • [40] Crossbar-Aligned & Integer-Only Neural Network Compression for Efficient In-Memory Acceleration
    Huai, Shuo
    Liu, Di
    Luo, Xiangzhong
    Chen, Hui
    Liu, Weichen
    Subramaniam, Ravi
    2023 28TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC, 2023, : 234 - 239