A High Efficient Architecture for Convolution Neural Network Accelerator

Cited by: 3
Authors
Kong Anmin [1 ]
Zhao Bin [1 ]
Affiliations
[1] ASTAR, Inst Microelect, Singapore, Singapore
Keywords
convolutional neural networks (CNNs); deep learning; energy-efficient accelerators;
DOI
10.1109/ICoIAS.2019.00029
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Convolutional neural networks (CNNs) are widely used in modern Artificial Intelligence (AI) systems. Compared with other classical methods, CNNs achieve superior performance in image classification, speech recognition, and object detection. However, the computational load of CNNs is very heavy, and a large amount of data movement is required. An efficient data-movement scheme is therefore critical to both the performance and the power efficiency of an accelerator design. In this paper we propose a novel CNN accelerator architecture with a unique parallel loading scheme and a smart memory-addressing solution. Our solution is 30% faster than prior work [1] on AlexNet, and it achieves high efficiency for the fully connected (FC) layers without using image batching, which makes it well suited for edge applications.
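For context, the sketch below is not taken from the paper: it shows a generic direct-convolution loop nest in Python plus a rough multiply-accumulate (MAC) count for an AlexNet-like layer, only to make concrete the compute and data-movement load the abstract refers to. The helper name conv_layer and all layer sizes are illustrative assumptions; the paper's parallel loading scheme and memory-addressing solution are not reproduced here.

```python
# Minimal sketch (assumed, not from the paper): a naive direct convolution
# whose loop nest illustrates why CNN layers are heavy in both computation
# and data movement, the bottleneck a CNN accelerator must manage.
import numpy as np

def conv_layer(ifmap, weights, stride=1):
    """Direct convolution: ifmap (C, H, W), weights (M, C, K, K) -> ofmap (M, Ho, Wo)."""
    C, H, W = ifmap.shape
    M, _, K, _ = weights.shape
    Ho = (H - K) // stride + 1
    Wo = (W - K) // stride + 1
    ofmap = np.zeros((M, Ho, Wo), dtype=ifmap.dtype)
    for m in range(M):                      # output channels
        for y in range(Ho):                 # output rows
            for x in range(Wo):             # output columns
                # Each output pixel reloads a C*K*K input window and a
                # C*K*K weight block: C*K*K MACs per output element.
                window = ifmap[:, y*stride:y*stride+K, x*stride:x*stride+K]
                ofmap[m, y, x] = np.sum(window * weights[m])
    return ofmap

# Small runnable example with illustrative shapes.
ifmap = np.random.rand(3, 8, 8).astype(np.float32)
weights = np.random.rand(4, 3, 3, 3).astype(np.float32)
print(conv_layer(ifmap, weights).shape)     # (4, 6, 6)

# Rough MAC count for one AlexNet-like layer (illustrative sizes only).
C, H, W, M, K = 96, 27, 27, 256, 5
macs = M * C * K * K * (H - K + 1) * (W - K + 1)
print(f"~{macs/1e6:.0f} M MACs for a single layer")
```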
Pages: 131-134
Page count: 4
Related Papers
50 records in total
  • [1] A New Accelerator for Convolution Neural Network
    Wu, Fan
    Song, Jie
    Zhuang, Haoran
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 7982 - 7985
  • [2] A sparse convolution neural network accelerator with bandwidth-efficient data loopback structure
    Yuan, Haiying
    Zeng, Zhiyong
    MICROPROCESSORS AND MICROSYSTEMS, 2023, 98
  • [3] Hybrid Convolution Architecture for Energy-Efficient Deep Neural Network Processing
    Kim, Suchang
    Jo, Jihyuck
    Park, In-Cheol
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2021, 68 (05) : 2017 - 2029
  • [4] Arithmetic Precision Reconfigurable Convolution Neural Network Accelerator
    Shen, En Ho
    Klopp, Jan P.
    Chien, Shao-Yi
    2020 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS), 2020, : 129 - 134
  • [5] VSCNN: Convolution Neural Network Accelerator With Vector Sparsity
    Chang, Kuo-Wei
    Chang, Tian-Sheuan
    2019 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2019,
  • [6] GRIP: A Graph Neural Network Accelerator Architecture
    Kiningham, Kevin
    Levis, Philip
    Re, Christopher
    IEEE TRANSACTIONS ON COMPUTERS, 2023, 72 (04) : 914 - 925
  • [7] Hardware Flexible Systolic Architecture for Convolution Accelerator in Convolutional Neural Networks
    Aguirre-Alvarez, Paulo Aaron
    Diaz-Carmona, Javier
    Arredondo-Velazquez, Moises
    2022 45TH INTERNATIONAL CONFERENCE ON TELECOMMUNICATIONS AND SIGNAL PROCESSING, TSP, 2022, : 305 - 309
  • [8] High-Performance Winograd Based Accelerator Architecture for Convolutional Neural Network
    Vardhana, M.
    Pinto, Rohan
    IEEE COMPUTER ARCHITECTURE LETTERS, 2025, 24 (01) : 21 - 24
  • [9] Learning an Efficient Convolution Neural Network for Pansharpening
    Guo, Yecai
    Ye, Fei
    Gong, Hao
    ALGORITHMS, 2019, 12 (01)
  • [10] Efficient Convolution Architectures for Convolutional Neural Network
    Wang, Jichen
    Lin, Jun
    Wang, Zhongfeng
    2016 8TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS & SIGNAL PROCESSING (WCSP), 2016,