A comprehensive survey on model compression and acceleration

Cited by: 281
Authors
Choudhary, Tejalal [1 ]
Mishra, Vipul [1 ]
Goswami, Anurag [1 ]
Sarangapani, Jagannathan [2 ]
Affiliations
[1] Bennett Univ, Greater Noida, India
[2] Missouri Univ Sci & Technol, Rolla, MO 65409 USA
Keywords
Model compression and acceleration; Machine learning; Deep learning; CNN; RNN; Resource-constrained devices; Efficient neural networks
Keywords Plus
Neural network; Proximal Newton; Quantization; Classification; Algorithm
DOI
10.1007/s10462-020-09816-7
CLC classification number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In recent years, machine learning (ML) and deep learning (DL) have shown remarkable improvements in computer vision, natural language processing, stock prediction, forecasting, and audio processing, to name a few. The models trained for these complex tasks are large, which makes them difficult to deploy on resource-constrained devices. For instance, the pre-trained VGG16 model trained on the ImageNet dataset is more than 500 MB in size. Resource-constrained devices such as mobile phones and Internet of Things (IoT) devices have limited memory and computational power, yet real-time applications require trained models to run on exactly such devices. Popular convolutional neural network models have millions of parameters, which inflates the size of the trained model. It is therefore essential to compress and accelerate these models before deploying them on resource-constrained devices, while compromising model accuracy as little as possible; retaining the original accuracy after compression is a challenging task. To address this challenge, many researchers have proposed techniques for model compression and acceleration in recent years. In this paper, we present a survey of the techniques proposed for compressing and accelerating ML and DL models. We also discuss the challenges of the existing techniques and provide future research directions in the field.
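As a back-of-envelope check on the ">500 MB" figure quoted in the abstract, the minimal sketch below estimates a model's serialized size from its parameter count. The parameter count of roughly 138 million for VGG16 and the int8 comparison line are illustrative assumptions, not values taken from this record; actual file sizes also vary with serialization format and metadata.

```python
# Back-of-envelope estimate of a model's on-disk size from its parameter count.
# Assumption: ~138 million parameters for VGG16 (commonly cited figure),
# each stored as a 32-bit float (4 bytes).

def model_size_mb(num_params: int, bytes_per_param: int = 4) -> float:
    """Approximate serialized model size in megabytes (MiB)."""
    return num_params * bytes_per_param / (1024 ** 2)

vgg16_params = 138_000_000  # assumed VGG16 parameter count

print(f"float32: {model_size_mb(vgg16_params):.0f} MB")           # ~526 MB, consistent with the >500 MB claim
print(f"int8 quantized: {model_size_mb(vgg16_params, 1):.0f} MB")  # ~132 MB, a 4x reduction
```

The second line illustrates why quantization, one of the keyword topics above, is attractive for resource-constrained deployment: reducing the precision of each stored parameter shrinks the footprint proportionally, before any pruning or other compression is applied.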
Pages: 5113-5155
Number of pages: 43
Related papers
178 entries in total
  • [71] Kim M. (2016) In: Proceedings of the International Conference on Machine Learning (ICML)
  • [72] Kim M. (2018) In: Signals and Communication Technology, p 187. DOI 10.1007/978-3-319-73031-8_8
  • [73] Kim Y. D. (2016) In: Proceedings of the 4th International Conference on Learning Representations (ICLR)
  • [74] Krizhevsky A. (2014) One weird trick for parallelizing convolutional neural networks. arXiv preprint
  • [75] Krizhevsky A., Sutskever I., Hinton G. E. (2017) ImageNet classification with deep convolutional neural networks. Communications of the ACM 60(6):84-90
  • [76] Krizhevsky A. (2009) Technical report, University of Toronto
  • [77] Kumar A. (2017) In: Proceedings of Machine Learning Research, vol 70
  • [78] Kusupati A. (2018) In: Advances in Neural Information Processing Systems, p 9031. DOI 10.5555/3327546.3327577
  • [79] Lan X. (2018) In: Advances in Neural Information Processing Systems, vol 31
  • [80] Le Q. (2013) In: Proceedings of the International Conference on Machine Learning (ICML)