Energy-Efficient Approximate Edge Inference Systems

Cited by: 10
Authors
Ghosh, Soumendu Kumar [1 ]
Raha, Arnab [2 ]
Raghunathan, Vijay [1 ]
Affiliations
[1] Purdue Univ, Elmore Family Sch Elect & Comp Engn, 610 Purdue Mall, W Lafayette, IN 47907 USA
[2] Intel Corp, 2200 Mission Coll Blvd, Santa Clara, CA 95054 USA
Keywords
Approximate computing; approximate systems; deep learning; DRAM; edge AI; edge-to-cloud computing; energy efficiency; quality-aware pruning; quality-energy tradeoff; CMOS IMAGE SENSOR; NEURAL-NETWORKS; PERFORMANCE; CHALLENGES;
DOI
10.1145/3589766
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline code
0812 ;
Abstract
The rapid proliferation of the Internet of Things and the dramatic resurgence of artificial intelligence-based application workloads have led to immense interest in performing inference on energy-constrained edge devices. Approximate computing (a design paradigm that trades off a small degradation in application quality for disproportionate energy savings) is a promising technique to enable energy-efficient inference at the edge. This article introduces the concept of an approximate edge inference system (AxIS) and proposes a systematic methodology to perform joint approximations across the different subsystems of a deep neural network (DNN)-based edge inference system, leading to significant energy benefits compared to approximating individual subsystems in isolation. We use a smart camera system that executes various DNN-based image classification and object detection applications to illustrate how the sensor, memory, compute, and communication subsystems can all be approximated synergistically. We demonstrate our proposed methodology using two variants of a smart camera system: (a) Cam-Edge, where the DNN is executed locally on the edge device, and (b) Cam-Cloud, where the edge device sends the captured image to a remote cloud server that executes the DNN. We have prototyped such an approximate inference system using an Intel Stratix IV GX-based Terasic TR4-230 FPGA development board. Experimental results obtained using six large DNNs and four compact DNNs running image classification applications demonstrate significant energy savings (≈1.6x-4.7x for large DNNs and ≈1.5x-3.6x for small DNNs) for minimal (<1%) loss in application-level quality. Furthermore, results using four object detection DNNs exhibit energy savings of ≈1.5x-5.2x for similar quality loss. Compared to approximating a single subsystem in isolation, AxIS achieves 1.05x-3.25x gains in energy savings for image classification and 1.35x-4.2x gains for object detection on average, for minimal (<1%) application-level quality loss.
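The joint-approximation idea summarized above can be sketched as a quality-aware knob search: each subsystem exposes approximation levels with an energy-savings factor and a quality cost, and a search picks the combination that maximizes savings under an end-to-end quality budget. The sketch below is illustrative only; the knob names, savings factors, and quality-loss numbers are made up for demonstration, whereas AxIS derives them from hardware measurement and application profiling.

```python
# Illustrative greedy joint approximation across subsystems, in the
# spirit of AxIS. All (energy_savings_factor, quality_loss_percent)
# pairs below are hypothetical, ordered least to most aggressive.
KNOBS = {
    "sensor":        [(1.0, 0.0), (1.2, 0.1), (1.5, 0.3)],
    "memory":        [(1.0, 0.0), (1.3, 0.2), (1.6, 0.4)],
    "compute":       [(1.0, 0.0), (1.4, 0.2), (1.9, 0.5)],
    "communication": [(1.0, 0.0), (1.2, 0.1), (1.4, 0.3)],
}

def joint_approximate(quality_budget=1.0):
    """Greedily raise the knob with the best marginal
    energy-savings-per-quality-loss ratio while the cumulative
    quality loss stays within the budget (e.g., <1%)."""
    level = {s: 0 for s in KNOBS}
    loss = 0.0
    while True:
        best, best_ratio = None, 0.0
        for s, opts in KNOBS.items():
            if level[s] + 1 >= len(opts):
                continue  # knob already at its most aggressive level
            cur_e, cur_q = opts[level[s]]
            nxt_e, nxt_q = opts[level[s] + 1]
            dq = nxt_q - cur_q
            if loss + dq > quality_budget:
                continue  # this step would bust the quality budget
            ratio = (nxt_e / cur_e) / (dq if dq > 0 else 1e-9)
            if ratio > best_ratio:
                best, best_ratio = s, ratio
        if best is None:
            break  # no affordable knob step remains
        loss += KNOBS[best][level[best] + 1][1] - KNOBS[best][level[best]][1]
        level[best] += 1
    # Subsystem savings compose multiplicatively in this toy model,
    # which is why joint tuning beats approximating one subsystem alone.
    savings = 1.0
    for s in KNOBS:
        savings *= KNOBS[s][level[s]][0]
    return level, savings, loss

levels, savings, loss = joint_approximate(quality_budget=1.0)
print(levels, round(savings, 2), round(loss, 2))
```

With these toy numbers, the greedy search spreads the 1% budget across all four subsystems and reaches a combined savings factor larger than any single knob could deliver alone, mirroring the isolated-vs-joint comparison reported in the abstract.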
Pages: 50