Sparse Deep Neural Network Optimization for Embedded Intelligence

Cited by: 3
Authors
Bi, Jia [1 ]
Gunn, Steve R. [1 ]
Affiliations
[1] Univ Southampton, Sch Elect & Comp Sci, Southampton SO17 1BJ, Hants, England
Keywords
First-order optimization; l1 regularization; model compression; deep neural network; embedded systems
DOI
10.1142/S0218213020600027
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks have become increasingly popular because of their ability to solve very complex pattern recognition problems. However, they often demand massive computational and memory resources, which is the main reason they are difficult to run efficiently, or at all, on embedded platforms. This work addresses the problem by reducing the computational and memory requirements of deep neural networks: it proposes a variance-reduced (VR) optimization method combined with regularization techniques that compresses the memory footprint of models while keeping training fast. It is shown theoretically and experimentally that sparsity-inducing regularization can be combined effectively with VR-based optimization, in which a hyper-parameter controls the stochastic behaviour of the optimizer so that non-convex problems can be solved.
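The combination described in the abstract, variance-reduced stochastic optimization with an l1 (sparsity-inducing) regularizer, can be illustrated with a proximal SVRG sketch on a simple least-squares problem. This is a minimal illustration of the general technique, not the authors' exact algorithm or hyper-parameters; the function names, step size, and regularization weight below are assumptions chosen for the demo.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1: shrinks entries toward zero,
    # setting small ones exactly to zero (this is what induces sparsity).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(A, b, lam, lr=0.005, epochs=30, seed=0):
    """Proximal SVRG for min_w (1/2n)||A w - b||^2 + lam * ||w||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot, recomputed once per epoch.
        full_grad = A.T @ (A @ w_snap - b) / n
        for _ in range(n):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ w - b[i])            # stochastic grad at w
            gi_snap = A[i] * (A[i] @ w_snap - b[i])  # same sample at snapshot
            v = gi - gi_snap + full_grad             # variance-reduced gradient
            # Gradient step on the smooth part, prox step on the l1 part.
            w = soft_threshold(w - lr * v, lr * lam)
    return w

# Demo: recover a sparse weight vector from noisy linear measurements.
rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = rng.standard_normal(5)      # only 5 truly nonzero weights
b = A @ w_true + 0.01 * rng.standard_normal(n)

w_hat = prox_svrg(A, b, lam=0.1)
sparsity = np.mean(np.abs(w_hat) < 1e-8)
print(f"fraction of exactly-zero weights: {sparsity:.2f}")
```

The key point mirrors the paper's claim: the variance-reduced gradient `v` keeps the stochastic noise controlled, so the proximal (soft-thresholding) step can drive irrelevant weights exactly to zero instead of merely making them small, which is what enables model compression.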
Pages: 26