Recovering Sparse Signals With a Certain Family of Nonconvex Penalties and DC Programming

Cited by: 227
Authors
Gasso, Gilles [1]
Rakotomamonjy, Alain [1]
Canu, Stephane [1]
Affiliation
[1] Univ Rouen, INSA, EA 4108, LITIS, F-76801 St Etienne du Rouvray, France
Keywords
DC programming; lasso; nonconvex regularization; signal representation; sparsity; variable selection; NONCONCAVE PENALIZED LIKELIHOOD; VARIABLE SELECTION; LASSO; RECONSTRUCTION; ALGORITHM; BRIDGE;
DOI
10.1109/TSP.2009.2026004
Chinese Library Classification: TM (Electrical Technology); TN (Electronic Technology, Communication Technology)
Discipline codes: 0808; 0809
Abstract
This paper considers the problem of recovering a sparse signal representation according to a signal dictionary. This problem can be formalized as a penalized least-squares problem in which sparsity is usually induced by an l1-norm penalty on the coefficients. Such an approach, known as the Lasso or Basis Pursuit Denoising, has been shown to perform reasonably well in some situations. However, it has also been proved that nonconvex penalties, like the pseudo lq-norm with q < 1 or the smoothly clipped absolute deviation (SCAD) penalty, are able to recover sparsity more efficiently than the Lasso. Several algorithms have been proposed for solving the resulting nonconvex least-squares problem. This paper proposes a generic algorithm to address such a sparsity recovery problem for a class of nonconvex penalties. Our main contribution is that the proposed methodology is based on an iterative algorithm which solves at each iteration a convex weighted Lasso problem. It relies on the family of nonconvex penalties which can be decomposed as a difference of convex functions (DC). This allows us to apply DC programming, which is a generic and principled way of solving nonsmooth and nonconvex optimization problems. We also show that several algorithms in the literature dealing with nonconvex penalties are particular instances of our algorithm. Experimental results demonstrate the effectiveness of the proposed generic framework compared to existing algorithms, including iterative reweighted least-squares methods.
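
As a rough illustration of the iterative scheme summarized in the abstract, the following Python sketch applies the DC idea to a pseudo lq-norm penalty: each outer iteration linearizes the concave part of the penalty and solves a convex weighted Lasso, here by ISTA. This is a minimal sketch under assumed choices, not the authors' exact algorithm; the function names (dc_lq_recovery, weighted_lasso_ista), the penalty-derivative weights, the smoothing constant eps, and all parameter values are illustrative assumptions.

import numpy as np

def weighted_lasso_ista(X, y, weights, n_iter=500, tol=1e-8):
    # Solve min_w 0.5*||y - X w||^2 + sum_i weights[i]*|w[i]| via ISTA.
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L        # gradient step
        w_new = np.sign(z) * np.maximum(np.abs(z) - weights / L, 0.0)  # soft-threshold
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

def dc_lq_recovery(X, y, lam=0.05, q=0.5, eps=1e-6, n_outer=20):
    # Outer DC loop: linearize the concave part of the l_q penalty (q < 1)
    # at the current iterate and solve the resulting weighted Lasso.
    w = weighted_lasso_ista(X, y, lam * np.ones(X.shape[1]))   # plain Lasso warm start
    for _ in range(n_outer):
        # Weight on |w_i| = derivative of lam*|t|^q at t = |w_i| (eps-smoothed).
        weights = lam * q * (np.abs(w) + eps) ** (q - 1.0)
        w = weighted_lasso_ista(X, y, weights)
    return w

# Toy usage: recover a 5-sparse vector from 64 noisy random measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 256))
w_true = np.zeros(256)
w_true[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
y = X @ w_true + 0.01 * rng.standard_normal(64)
w_hat = dc_lq_recovery(X, y)

Because each subproblem is a convex weighted Lasso, any off-the-shelf l1 solver can replace the ISTA routine; the DC structure only dictates how the weights are refreshed between outer iterations.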
Pages: 4686-4698 (13 pages)