LHC: A Low-power Heterogeneous Computing Method on Neural Network Accelerator

Cited by: 0
Authors
Liu, Fangxin [1 ]
Xie, Kunpeng [1 ]
Gong, Cheng [1 ]
Liu, Shusheng [1 ]
Lu, Ye [1 ]
Li, Tao [1 ]
Affiliations
[1] Nankai Univ, Coll Comp Sci, Intelligent Comp Syst Lab, Tianjin, Peoples R China
Keywords
Heterogeneous Computing; Neural-Network Accelerator; Low-power Computing; Computational Task Assignment; Computing Adaptation;
DOI
10.1109/ICPADS47876.2019.00053
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Accelerators can achieve high performance and low energy consumption in the training or inference of neural networks. If Non-Neural-Network (Non-NN) algorithms with a large amount of computation could make full use of these accelerators, it would be possible to speed up their execution, reduce energy consumption, and achieve load balancing, especially on mobile devices equipped with accelerators. However, accelerators are dedicated to neural network calculations, so Non-NN algorithms have difficulty exploiting their advantages. Furthermore, many hardware-specific restrictions become obstacles, such as the constrained precision of operands and the limited computation scale. In this paper, we propose a method named Low-power Heterogeneous Computing (LHC) to bridge the gap between Non-NN algorithms and NN accelerators. First, we analyze the general principles of the accelerator and reveal its calculation model. To hide the details of the underlying neural network library, we extract operators from the limited set of neural network computations it supports. We encapsulate the low-level library, extract operators suitable for general algorithms, and implement more advanced operators that adapt to the constrained hardware conditions. These operators make it easier for programmers to implement Non-NN algorithms. On the algorithm side, we extract the computationally intensive parts of a Non-NN algorithm and deploy these computational tasks on the accelerator by calling the operators. To verify our method, we implement three Non-NN algorithms, namely Grid-based Motion Statistics, k-Nearest Neighbors, and k-Means, on a specific accelerator, Cambricon-1A, by using the operators and adjusting the algorithms to the hardware constraints. The experimental results show that the energy consumption of the computation is reduced by up to 5.4x compared with the CPU baseline. Our method can be further applied to other similar accelerators.
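As an illustration of the kind of mapping the abstract describes (not code from the paper), the minimal NumPy sketch below shows how the computationally intensive part of k-Nearest Neighbors, the pairwise distance computation, can be rewritten around a single matrix-multiplication operator, the style of primitive an NN accelerator library typically exposes. The function name accelerator_matmul is a hypothetical stand-in for such an offloaded operator, not an actual Cambricon API.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# express the dominant cost of kNN as one GEMM-style operator call.
# `accelerator_matmul` is a hypothetical placeholder for an offloaded
# matrix-multiplication operator; NumPy emulates it here.
import numpy as np

def accelerator_matmul(a, b):
    # Placeholder for the operator that would run on the accelerator.
    return a @ b

def knn_indices(queries, points, k):
    """Return indices of the k nearest points for each query row.

    Uses ||q - p||^2 = ||q||^2 + ||p||^2 - 2 * (q . p), so the dominant
    cost (all pairwise dot products) becomes one matrix multiplication.
    """
    q_sq = np.sum(queries ** 2, axis=1, keepdims=True)   # (m, 1)
    p_sq = np.sum(points ** 2, axis=1)                    # (n,)
    cross = accelerator_matmul(queries, points.T)         # (m, n), offloadable
    dist_sq = q_sq + p_sq - 2.0 * cross
    return np.argsort(dist_sq, axis=1)[:, :k]

# Example usage
rng = np.random.default_rng(0)
pts = rng.standard_normal((1000, 64)).astype(np.float32)
qs = rng.standard_normal((10, 64)).astype(np.float32)
print(knn_indices(qs, pts, k=5).shape)  # (10, 5)
```

In practice, the constrained operand precision mentioned in the abstract would also require quantization and rescaling of the inputs before the operator call, which this sketch omits.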
Pages: 326-334
Number of pages: 9