LHC: A Low-power Heterogeneous Computing Method on Neural Network Accelerator

Cited by: 0
Authors
Liu, Fangxin [1 ]
Xie, Kunpeng [1 ]
Gong, Cheng [1 ]
Liu, Shusheng [1 ]
Lu, Ye [1 ]
Li, Tao [1 ]
Affiliations
[1] Nankai Univ, Coll Comp Sci, Intelligent Comp Syst Lab, Tianjin, Peoples R China
Keywords
Heterogeneous Computing; Neural-Network Accelerator; Low-power Computing; Computational Task Assignment; Computing Adaptation
DOI
10.1109/ICPADS47876.2019.00053
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Accelerators can achieve high performance and low energy consumption when training or performing inference on neural networks. If Non-Neural-Network (Non-NN) algorithms with a large amount of computation could make full use of these accelerators, it would be possible to speed up their execution, reduce energy consumption, and achieve load balancing, especially on mobile devices equipped with accelerators. However, accelerators are dedicated to neural-network calculations, so other Non-NN algorithms have difficulty exploiting their advantages. Furthermore, many hardware-specific restrictions become obstacles, such as the constrained precision of operands and the limited computation scale. In this paper, we propose a method named Low-power Heterogeneous Computing (LHC) to bridge the gap between Non-NN algorithms and NN accelerators. First, we analyze the general principle of the accelerator and reveal its calculation model. To hide the details of the underlying neural-network library, we extract operators from the limited set of neural-network computations it supports. We encapsulate the low-level library, extract operators suitable for general algorithms, and implement higher-level operators that adapt to the constrained hardware conditions. These operators make it easier for programmers to implement Non-NN algorithms. On the algorithm side, we extract the computationally intensive parts of a Non-NN algorithm and deploy these computational tasks on the accelerator by calling the operators. To verify our method, we implement and adapt three Non-NN algorithms with these operators, namely Grid-based Motion Statistics, k-Nearest Neighbors, and k-Means, on a specific accelerator, the Cambricon-1A. The experimental results show that the energy consumption of the computation is reduced by up to 5.4x compared with the CPU baseline. Our method can be further applied to other similar accelerators.
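The abstract outlines mapping the compute-heavy core of a Non-NN algorithm (e.g. k-Nearest Neighbors) onto operators wrapped from the accelerator's neural-network library. Purely as an illustrative sketch, and not the paper's actual code, the Python snippet below shows one way such a mapping can look: the kNN distance computation is rewritten as a single large matrix multiplication, the kind of primitive an NN accelerator library typically exposes. The function accel_matmul and the NumPy backend are placeholders standing in for the encapsulated Cambricon-1A operators, which the abstract does not name.

import numpy as np

def accel_matmul(a, b):
    # Placeholder for an encapsulated accelerator operator (e.g. a wrapped
    # fully-connected/convolution primitive); here it simply runs on the CPU.
    return a @ b

def knn_indices(queries, points, k):
    # Squared Euclidean distance expands to ||q||^2 - 2*q.p + ||p||^2, so the
    # dominant cross term -2*q.p becomes one large matrix multiplication that
    # could be offloaded to the accelerator.
    q_sq = np.sum(queries ** 2, axis=1, keepdims=True)    # shape (Nq, 1)
    p_sq = np.sum(points ** 2, axis=1, keepdims=True).T   # shape (1, Np)
    cross = accel_matmul(queries, points.T)               # shape (Nq, Np)
    dists = q_sq - 2.0 * cross + p_sq
    return np.argsort(dists, axis=1)[:, :k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.standard_normal((1000, 64)).astype(np.float32)
    qry = rng.standard_normal((8, 64)).astype(np.float32)
    print(knn_indices(qry, pts, k=5).shape)   # -> (8, 5)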
Pages: 326 - 334
Number of pages: 9
Related Papers
50 records in total
  • [21] A Low-Power Task Mapping Method for Network on Chip
    Cao, Wenwen
    Hu, Wei
    Wang, Puzhang
    Song, Mengke
    Li, Ruomiao
    2015 CHINESE AUTOMATION CONGRESS (CAC), 2015, : 1171 - 1176
  • [22] A Low-Power Spike-Like Neural Network Design
    Losh, Michael
    Llamocca, Daniel
    ELECTRONICS, 2019, 8 (12)
  • [23] Routing protocol for Low-Power and Lossy Networks for heterogeneous traffic network
    Arslan Musaddiq
    Yousaf Bin Zikria
Zulqarnain
Sung Won Kim
    EURASIP Journal on Wireless Communications and Networking, 2020
  • [24] Routing protocol for Low-Power and Lossy Networks for heterogeneous traffic network
    Musaddiq, Arslan
    Zikria, Yousaf Bin
    Zulqarnain
    Kim, Sung Won
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2020, 2020 (01)
  • [25] A Construction Kit for Efficient Low Power Neural Network Accelerator Designs
    Jokic, Petar
    Azarkhish, Erfan
    Bonetti, Andrea
    Pons, Marc
    Emery, Stephane
    Benini, Luca
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2022, 21 (05)
  • [26] Low-Power HW Accelerator for AI Edge-Computing in Human Activity Recognition Systems
    De Vita, Antonio
    Pau, Danilo
    Parrella, Claudio
    Di Benedetto, Luigi
Rubino, Alfredo
    Licciardo, Gian Domenico
    2020 2ND IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2020), 2020, : 291 - 295
  • [27] Low-Power Computing with Neuromorphic Engineering
    Liu, Dingbang
    Yu, Hao
    Chai, Yang
    ADVANCED INTELLIGENT SYSTEMS, 2021, 3 (02)
  • [28] Constrained TSP and low-power computing
    Charikar, M
    Motwani, R
    Raghavan, P
    Silverstein, C
    ALGORITHMS AND DATA STRUCTURES, 1997, 1272 : 104 - 115
  • [29] A Low Power and Low Latency FPGA-Based Spiking Neural Network Accelerator
    Liu, Hanwen
    Chen, Yi
    Zeng, Zihang
    Zhang, Malu
    Qu, Hong
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023
  • [30] EcoFlow: Efficient Convolutional Dataflows on Low-Power Neural Network Accelerators
    Orosa, Lois
    Koppula, Skanda
    Umuroglu, Yaman
    Kanellopoulos, Konstantinos
    Gomez-Luna, Juan
    Blott, Michaela
    Vissers, Kees
    Mutlu, Onur
    IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (09) : 2275 - 2289