LHC: A Low-power Heterogeneous Computing Method on Neural Network Accelerator

Cited by: 0
Authors
Liu, Fangxin [1 ]
Xie, Kunpeng [1 ]
Gong, Cheng [1 ]
Liu, Shusheng [1 ]
Lu, Ye [1 ]
Li, Tao [1 ]
Institution
[1] Nankai Univ, Coll Comp Sci, Intelligent Comp Syst Lab, Tianjin, Peoples R China
Keywords
Heterogeneous Computing; Neural-Network Accelerator; Low-power Computing; Computational Task Assignment; Computing Adaptation;
DOI
10.1109/ICPADS47876.2019.00053
Chinese Library Classification
TP3 [Computing technology and computer technology]
Discipline code
0812
Abstract
Accelerators can achieve high performance and low energy consumption in the training or inference of neural networks. If Non-Neural Network (Non-NN) algorithms with large amounts of computation could make full use of these accelerators, it would be possible to speed up their execution, reduce energy consumption, and achieve load balancing, especially on mobile devices equipped with accelerators. However, accelerators are dedicated to neural network calculations, so Non-NN algorithms have difficulty exploiting their advantages. Furthermore, many hardware-specific restrictions become obstacles, such as the constrained precision of operands and the limited computation scale. In this paper, we propose a method named Low-power Heterogeneous Computing (LHC) to bridge the gap between Non-NN algorithms and NN accelerators. First, we analyze the general principles of the accelerator and reveal its calculation model. To hide the details of the underlying neural network library, we encapsulate the low-level library, extract operators suitable for general algorithms from the limited set of neural network computations it supports, and implement more advanced operators that adapt to the constrained hardware conditions. These operators make it easier for programmers to implement Non-NN algorithms. On the algorithm side, we extract the computationally intensive parts of a Non-NN algorithm and deploy these computational tasks on the accelerator by calling the operators. To verify our method, we adapt and implement three Non-NN algorithms using these operators, namely Grid-based Motion Statistics, k-Nearest Neighbors, and k-Means, on a specific accelerator, the Cambricon-1A. The experimental results show that the energy consumption of calculation is reduced by up to 5.4x compared with the CPU baseline. Our method can be further applied to other similar accelerators.
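The abstract's central idea, recasting the compute-heavy kernel of a Non-NN algorithm in terms of operators that a matmul-centric NN accelerator supports, can be illustrated with a minimal NumPy sketch. This is a generic illustration of the technique, not the paper's actual operator library or the Cambricon-1A API: the squared Euclidean distances needed by k-Nearest Neighbors are expanded as ||q - p||^2 = ||q||^2 + ||p||^2 - 2 q.p, so the dominant cost becomes a single matrix multiplication that an accelerator could execute.

```python
import numpy as np

def knn_via_matmul(queries, points, k):
    """Indices of the k nearest points for each query row.

    Illustrative sketch only: the distance computation is recast so that
    the dominant cost is one matrix multiplication (queries @ points.T),
    the kind of operator an NN accelerator natively provides.
    """
    q_sq = np.sum(queries ** 2, axis=1, keepdims=True)   # shape (m, 1)
    p_sq = np.sum(points ** 2, axis=1, keepdims=True).T  # shape (1, n)
    cross = queries @ points.T                           # shape (m, n), matmul
    dist_sq = q_sq + p_sq - 2.0 * cross                  # squared distances
    return np.argsort(dist_sq, axis=1)[:, :k]            # k smallest per row
```

In an LHC-style deployment, the `@` product would be the part offloaded to the accelerator, while the cheap reductions and the top-k selection could stay on the CPU; the hardware-specific precision and scale constraints mentioned in the abstract would further shape how the matrices are tiled and quantized.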
Pages: 326-334
Page count: 9