Efficient and sparse neural networks by pruning weights in a multiobjective learning approach

Cited by: 19
Authors
Reiners, Malena [1 ]
Klamroth, Kathrin [1 ]
Heldmann, Fabian [1 ]
Stiglmayr, Michael [1 ]
Affiliations
[1] Univ Wuppertal, Sch Math & Nat Sci, Wuppertal, Germany
Keywords
Multiobjective learning; Unstructured pruning; Stochastic multi-gradient descent; l1-regularization; Automated machine learning
DOI
10.1016/j.cor.2021.105676
Chinese Library Classification
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
Overparameterization and overfitting are common concerns when designing and training deep neural networks, which are often counteracted by pruning and regularization strategies. However, these strategies remain secondary to most learning approaches and involve time-consuming and computationally intensive procedures. We suggest a multiobjective perspective on the training of neural networks by treating prediction accuracy and network complexity as two individual objective functions in a biobjective optimization problem. As a showcase example, we use the cross entropy as a measure of the prediction accuracy while adopting an l1-penalty function to assess the total cost (or complexity) of the network parameters. The latter is combined with an intra-training pruning approach that reinforces complexity reduction and requires only marginal extra computational cost. From the perspective of multiobjective optimization, this is a truly large-scale optimization problem. We compare two different optimization paradigms: on the one hand, we adopt a scalarization-based approach that transforms the biobjective problem into a series of weighted-sum scalarizations; on the other hand, we implement stochastic multi-gradient descent algorithms that generate a single Pareto optimal solution without requiring or using preference information. In the first case, favorable knee solutions are identified by repeated training runs with adaptively selected scalarization parameters. Numerical results on exemplary convolutional neural networks confirm that large reductions in the complexity of neural networks with negligible loss of accuracy are possible.
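To make the weighted-sum scalarization concrete, the following NumPy sketch (an illustration of the general idea, not the authors' code) minimizes lam * cross entropy + (1 - lam) * l1 for a tiny linear classifier by subgradient descent, combined with simple magnitude-based pruning during training in the spirit of the intra-training pruning described above. All names, the learning rate, and the pruning threshold are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def scalarized_loss(W, X, y, lam):
    """Weighted-sum scalarization: lam * cross entropy + (1 - lam) * l1-penalty."""
    p = softmax(X @ W)
    ce = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    return lam * ce + (1 - lam) * np.abs(W).sum()

def train(X, y, n_classes, lam=0.9, lr=0.1, steps=200, prune_thresh=1e-2):
    """Subgradient descent on the scalarized objective with intra-training
    magnitude pruning: weights below prune_thresh are zeroed each step."""
    n, d = X.shape
    W = np.random.default_rng(0).normal(scale=0.1, size=(d, n_classes))
    for _ in range(steps):
        p = softmax(X @ W)
        p[np.arange(n), y] -= 1.0          # dCE/dlogits = (p - onehot) / n
        grad_ce = X.T @ p / n
        grad = lam * grad_ce + (1 - lam) * np.sign(W)  # l1 subgradient
        W -= lr * grad
        W[np.abs(W) < prune_thresh] = 0.0  # unstructured magnitude pruning
    return W
```

Choosing different values of lam traces out different trade-offs between the two objectives; the paper's scalarization-based approach repeats such runs with adaptively selected weights to locate knee solutions.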
Pages: 16