Neural network for a class of sparse optimization with L0-regularization

Cited by: 8
Authors
Wei, Zhe [1,2]
Li, Qingfa [2]
Wei, Jiazhen [1]
Bian, Wei [1]
Affiliations
[1] Harbin Institute of Technology, School of Mathematics, Harbin, People's Republic of China
[2] Heilongjiang Institute of Technology, Department of Mathematics, Harbin, People's Republic of China
Keywords
L0-regularization; projected neural network; sparse optimization; nonconvex optimization; critical point; neurodynamic approach; variable selection; signal reconstruction; nonsmooth; regression; algorithm
DOI
10.1016/j.neunet.2022.03.033
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Sparse optimization with the L0-norm function as the regularization term in the objective has wide application in many fields. In this paper, we propose a projected neural network, modeled by a differential equation, to solve a class of such optimization problems in which the objective function is the sum of a nonsmooth convex loss function and a regularization term defined by the L0-norm function. This optimization problem is not only nonconvex but also discontinuous. To simplify the structure of the proposed network and endow it with better convergence properties, we use a smoothing method, in which a newly constructed smoothing function for the regularization term plays a key role. We prove that the solution of the proposed network exists globally and is unique, and that any accumulation point of it is a critical point of the continuous relaxation model. Except for a special case, which can be easily checked, any critical point is a local minimizer of the considered sparse optimization problem. Interestingly, all critical points possess a desirable lower-bound property that is satisfied by all global minimizers of the considered problem, but not by all local minimizers. Finally, we present numerical experiments to illustrate the efficiency and good performance of the proposed method on this class of sparse optimization problems, including the models most widely used for feature selection in classification learning. (C) 2022 Elsevier Ltd. All rights reserved.
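As a rough illustration of the kind of dynamics the abstract describes, the sketch below integrates a projected gradient flow by forward Euler on a smoothed relaxation of an L0-regularized least-absolute-deviations problem. The Huber smoothing of the L1 loss, the Gaussian surrogate for the L0 term, the box constraint [-R, R]^n, and every parameter value are illustrative assumptions; the paper's own smoothing function and network construction differ and are not reproduced here.

```python
import numpy as np

def huber_grad(r, mu):
    """Gradient of the Huber smoothing of |r| (smoothing parameter mu > 0)."""
    return np.where(np.abs(r) > mu, np.sign(r), r / mu)

def l0_surrogate_grad(x, sigma):
    """Gradient of sum_i (1 - exp(-x_i^2 / (2 sigma^2))), a common smooth
    surrogate for ||x||_0 (an assumption here, not the paper's function)."""
    return (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))

def projected_flow(A, b, lam, R=10.0, mu=1e-2, sigma=1e-1,
                   step=1e-3, n_steps=20000):
    """Forward-Euler integration of the projected gradient flow
    dx/dt = P_[-R,R]^n(x - grad F(x)) - x, where F is the smoothed
    objective ||Ax - b||_1 (Huber) + lam * (smooth L0 surrogate)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_steps):
        grad = (A.T @ huber_grad(A @ x - b, mu)
                + lam * l0_surrogate_grad(x, sigma))
        # One Euler step of the ODE: move toward the projected point.
        x += step * (np.clip(x - grad, -R, R) - x)
    return x

# Toy usage: recover a 3-sparse vector from 40 noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # normalized sensing matrix
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = projected_flow(A, b, lam=0.1)
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

The right-hand side P(x - grad F(x)) - x is the standard form of a projected neural network; near zero the surrogate's gradient acts like a strong pull toward the origin, which is what drives small entries to exactly (or nearly) zero while leaving large entries essentially untouched.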
Pages: 211-221 (11 pages)