Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications

Cited by: 64
Authors
Cai, Han [1 ]
Lin, Ji [1 ]
Lin, Yujun [1 ]
Liu, Zhijian [1 ]
Tang, Haotian [1 ]
Wang, Hanrui [1 ]
Zhu, Ligeng [1 ]
Han, Song [1 ]
Affiliations
[1] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Keywords
Efficient deep learning; TinyML; model compression; AutoML; neural architecture search; neural network accelerator; architecture; implementation; coprocessor; prediction; model
DOI
10.1145/3486618
CLC number
TP3 [Computing Technology, Computer Technology]
Discipline code
0812
Abstract
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their application on many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that can lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand to enable numerous edge AI applications. This article provides an overview of efficient deep learning methods, systems, and applications. We start by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video, and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
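To make the compression methods named in the abstract concrete, the sketch below illustrates two of them on a plain NumPy weight matrix: magnitude-based pruning (zero out the smallest-magnitude fraction of weights) followed by symmetric linear quantization to int8. This is a minimal illustration under simple assumptions (per-tensor scale, unstructured sparsity), not the specific algorithms surveyed in the article; the function names are hypothetical.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor linear quantization to int8; returns (codes, scale)."""
    scale = float(np.abs(weights).max()) / 127.0
    codes = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return codes, scale

# Toy example: prune 50% of a 4x4 weight matrix, then quantize the survivors.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
pruned = magnitude_prune(w, 0.5)
codes, scale = quantize_int8(pruned)
dequantized = codes.astype(np.float64) * scale  # reconstruction for inference
```

The storage win is multiplicative: 50% unstructured sparsity plus int8 codes shrink the dense float32 tensor roughly 8x, at the cost of a reconstruction error bounded by half the quantization step.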
Pages: 50