Scaling for edge inference of deep neural networks

Cited by: 328
Authors
Xu, Xiaowei [1 ]
Ding, Yukun [1 ]
Hu, Sharon Xiaobo [1 ]
Niemier, Michael [1 ]
Cong, Jason [2 ]
Hu, Yu [3 ]
Shi, Yiyu [1 ]
Affiliations
[1] Univ Notre Dame, Dept Comp Sci, Notre Dame, IN 46556 USA
[2] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90024 USA
[3] Huazhong Univ Sci & Technol, Sch Opt & Elect Informat, Wuhan, Hubei, Peoples R China
Keywords
ENERGY
DOI
10.1038/s41928-018-0059-3
Chinese Library Classification
TM [Electrical engineering]; TN [Electronic and communication technology]
Discipline codes
0808; 0809
Abstract
Deep neural networks offer considerable potential across a range of applications, from advanced manufacturing to autonomous cars. A clear trend in deep neural networks is the exponential growth of network size and the associated increases in computational complexity and memory consumption. However, the performance and energy efficiency of edge inference, in which inference (the application of a trained network to new data) is performed locally on embedded platforms with limited area and power budgets, are bounded by technology scaling. Here we analyse recent data and show that there are increasing gaps between the computational complexity and energy efficiency required by data scientists and the hardware capacity made available by hardware architects. We then discuss various architecture and algorithm innovations that could help to bridge the gaps.
Pages: 216-222 (7 pages)