Adaptive sparse and dense hybrid representation with nonconvex optimization

Cited: 0
Authors
Wang, Xuejun [1 ]
Cao, Feilong [2 ]
Wang, Wenjian [3 ]
Affiliations
[1] Shanxi Univ, Sch Comp Sci & Technol, Taiyuan 030006, Peoples R China
[2] China Jiliang Univ, Sch Sci, Hangzhou 310018, Peoples R China
[3] Minist Educ, Key Lab Computat Intelligence & Chinese Informat, Taiyuan 030006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
sparse representation; trace norm; nonconvex optimization; low rank matrix recovery; iteratively reweighted nuclear norm; ROBUST FACE RECOGNITION; RECONSTRUCTION; INCOHERENCE; SELECTION; EQUATIONS; RECOVERY; SYSTEMS;
DOI
10.1007/s11704-019-7200-y
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Discipline classification code
0812;
Abstract
Sparse representation has been widely used in signal processing, pattern recognition, and computer vision, with notable achievements in both theory and practice. However, its application to classification has two limitations: sufficient training samples are required for each class, and the samples should be uncorrupted. To alleviate these problems, the sparse and dense hybrid representation (SDR) framework was proposed, in which the training dictionary is decomposed into a class-specific dictionary and a non-class-specific dictionary. SDR places an l1 constraint on the coefficients of the class-specific dictionary; however, this over-emphasizes sparsity and overlooks the correlation information within the class-specific dictionary, which may lead to poor classification results. To overcome this drawback, this paper proposes an adaptive sparse and dense hybrid representation with non-convex optimization (ASDR-NO). Unlike common approaches, the trace norm is imposed on the class-specific dictionary, which makes the dictionary structure adaptive and improves its representation ability. Meanwhile, a non-convex surrogate is used to approximate the rank function in the dictionary decomposition, in order to avoid settling for a suboptimal solution to the original rank minimization; the resulting problem is solved by the iteratively reweighted nuclear norm (IRNN) algorithm. Extensive experiments on benchmark data sets verify the effectiveness of the proposed algorithm and its advantage over state-of-the-art sparse representation methods.
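The record does not include the paper's formulation or pseudocode. As a minimal, hypothetical sketch of the iteratively reweighted nuclear norm (IRNN) idea mentioned in the abstract, the Python snippet below recovers a low-rank matrix under a non-convex log-style surrogate of the rank. The function names (weighted_svt, irnn_lowrank_recovery), the surrogate g(s) = lam*log(1 + s/gamma), and the simple Frobenius data-fit term are illustrative assumptions, not the ASDR-NO model itself.

```python
import numpy as np

def weighted_svt(M, weights):
    """Weighted singular value thresholding: shrink each singular value
    of M by its corresponding (non-decreasing) weight and rebuild M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt

def irnn_lowrank_recovery(Y, lam=1.0, gamma=0.1, mu=1.1, n_iter=100):
    """IRNN-style sketch for  min_X  sum_i g(sigma_i(X)) + 0.5*||X - Y||_F^2
    with the concave surrogate g(s) = lam * log(1 + s/gamma) (an assumption).
    Each iteration linearizes g at the current singular values and solves a
    weighted nuclear-norm proximal step in closed form via weighted SVT."""
    X = Y.copy()
    for _ in range(n_iter):
        # gradient of the smooth data term f(X) = 0.5 * ||X - Y||_F^2
        grad = X - Y
        # supergradient of the concave surrogate at the current singular values;
        # sigma is sorted in descending order, so the weights are non-decreasing,
        # which is what makes the weighted SVT step exact
        sigma = np.linalg.svd(X, compute_uv=False)
        w = lam / (gamma + sigma)
        # proximal (weighted SVT) step with step size 1/mu, mu > Lipschitz(grad f)
        X = weighted_svt(X - grad / mu, w / mu)
    return X

# Illustrative use: denoise a noisy observation of a rank-5 matrix.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
Y = L + 0.1 * rng.standard_normal((50, 40))
X_hat = irnn_lowrank_recovery(Y)
```

Compared with the convex nuclear norm, a concave surrogate of this kind assigns smaller shrinkage to large singular values, which is the usual motivation for preferring non-convex rank surrogates in low-rank recovery.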
Pages: 14