Generalized and Robust Least Squares Regression

Cited by: 7
Authors
Wang, Jingyu [1,2]
Xie, Fangyuan [1,2]
Nie, Feiping [1,3]
Li, Xuelong [1,3]
Affiliations
[1] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect (iOPEN), Xian 710072, Peoples R China
[2] Northwestern Polytech Univ, Sch Astronaut, Xian 710072, Peoples R China
[3] Northwestern Polytech Univ, Key Lab Intelligent Interact & Applicat, Minist Ind & Informat Technol, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Optimization; Robustness; Feature extraction; Sparse matrices; Noise measurement; Task analysis; Iterative methods; Classification; feature selection (FS); generalized framework; linear regression; robust; NEURAL-NETWORK APPROACH; NEURODYNAMIC APPROACH; BOLTZMANN MACHINES; PARALLEL ALGORITHM; GENETIC ALGORITHM; LOCAL SEARCH; LARGE-SCALE; OPTIMIZATION; MODEL; COMBINATORIAL;
DOI
10.1109/TNNLS.2022.3213594
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
As a simple yet effective method, least squares regression (LSR) is extensively applied to data regression and classification. Combined with sparse representation, LSR can also be extended to feature selection (FS), where $\ell_{1}$ regularization is often applied in embedded FS algorithms. However, because the loss function takes the form of squared error, LSR and its variants are sensitive to noise, which significantly degrades the effectiveness of classification and FS. To cope with this problem, we propose a generalized and robust LSR (GRLSR) for classification and FS, which combines an arbitrary concave loss function with an $\ell_{2,p}$-norm regularization term. An iterative algorithm is applied to efficiently solve the resulting nonconvex minimization problem, in which each data point receives an additional weight that suppresses the effect of noise. The weights are assigned automatically according to the error of each sample: when the error is large, the corresponding weight is small. This mechanism allows GRLSR to reduce the impact of noise and outliers. Four specific methods, corresponding to different formulations of the concave loss function, are proposed to clarify the essence of the framework. Comprehensive experiments on corrupted datasets demonstrate the advantage of the proposed method.
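The abstract describes an iteratively reweighted scheme: a concave loss on per-sample residual norms induces sample weights that shrink as the error grows. Below is a minimal sketch of that idea, assuming a Welsch-type concave loss $h(e) = 1 - \exp(-e^2/\sigma^2)$ and an IRLS-style closed-form update; the function name, the specific loss, and the parameters `sigma`, `lam`, and `p` are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def grlsr_irls_sketch(X, Y, p=1.0, lam=1e-2, sigma=1.0, n_iter=50, eps=1e-8):
    """Hypothetical IRLS-style sketch of a GRLSR-type objective:

        min_W  sum_i h(||W^T x_i - y_i||_2) + lam * ||W||_{2,p}^p

    with h(e) = 1 - exp(-e^2 / sigma^2) as one concave loss choice
    (the framework admits arbitrary concave losses).
    X: (n, d) data matrix; Y: (n, c) target/label matrix.
    """
    # Warm start from ordinary least squares.
    W = np.linalg.lstsq(X, Y, rcond=None)[0]
    for _ in range(n_iter):
        # Per-sample residual norms e_i = ||W^T x_i - y_i||_2.
        e = np.linalg.norm(X @ W - Y, axis=1) + eps
        # Sample weights s_i = h'(e_i) / (2 e_i) = exp(-e_i^2/sigma^2)/sigma^2:
        # large residuals get small weights, suppressing noisy points.
        s = np.exp(-(e ** 2) / sigma ** 2) / sigma ** 2 + eps
        # Row-wise reweighting that linearizes the l_{2,p} regularizer on W.
        d_reg = (p / 2.0) * (np.linalg.norm(W, axis=1) + eps) ** (p - 2.0)
        # Weighted ridge-type update: W = (X^T S X + lam * D)^{-1} X^T S Y.
        XtS = X.T * s  # scales column i of X^T by s_i
        W = np.linalg.solve(XtS @ X + lam * np.diag(d_reg), XtS @ Y)
    return W

# Toy usage: regression targets with a few gross outliers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
W_true = rng.normal(size=(10, 3))
Y = X @ W_true + 0.05 * rng.normal(size=(200, 3))
Y[:10] += 20.0  # corrupt 10 samples
W_hat = grlsr_irls_sketch(X, Y)
print(np.linalg.norm(W_hat - W_true))
```

The weight $s_i$ is the derivative ratio of the concave loss, so outlying samples are downweighted automatically at each iteration rather than by a preset threshold, which is the noise-suppression mechanism the abstract refers to.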
Pages: 7006-7020
Number of pages: 15