A Class of Nonconvex Penalties Preserving Overall Convexity in Optimization-Based Mean Filtering

Cited: 28
Authors
Malek-Mohammadi, Mohammadreza [1,2]
Rojas, Cristian R. [1,2]
Wahlberg, Bo [1,2]
Affiliations
[1] KTH Royal Inst Technol, Dept Automat Control, S-10044 Stockholm, Sweden
[2] KTH Royal Inst Technol, ACCESS Linnaeus Ctr, S-10044 Stockholm, Sweden
Funding
Swedish Research Council
Keywords
Change point recovery; mean filtering; nonconvex penalty; piecewise constant signal; sparse signal processing; total variation denoising; MODEL SELECTION; SPARSE SIGNALS; NOISE REMOVAL; MINIMIZATION; RECOVERY; APPROXIMATION; RELAXATION; ALGORITHMS; SHRINKAGE; LASSO;
DOI
10.1109/TSP.2016.2612179
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
ℓ1 mean filtering is a conventional, optimization-based method for estimating the positions of jumps in a piecewise constant signal perturbed by additive noise. In this method, the ℓ1 norm of the first-order difference of the signal is penalized to promote its sparsity. Theoretical results, however, show that in some situations, which can occur frequently in practice, the conventional method identifies false change points even when the jump amplitudes tend to infinity. This issue, referred to herein as the stair-casing problem, limits the practical value of ℓ1 mean filtering. In this paper, sparsity is penalized more tightly than with the ℓ1 norm by exploiting a certain class of nonconvex functions, while the strict convexity of the resulting optimization problem is preserved. This leads to higher performance in detecting change points. To theoretically justify the performance improvements over ℓ1 mean filtering, deterministic and stochastic sufficient conditions for exact change-point recovery are derived. In particular, the theoretical results show that, in the stair-casing setting, the proposed approach may be able to exclude the false change points where ℓ1 mean filtering fails. Numerical simulations demonstrate the superiority of the proposed method over ℓ1 mean filtering and over another state-of-the-art algorithm that promotes sparsity more tightly than the ℓ1 norm. Specifically, the proposed approach consistently detects change points once the jump amplitudes become sufficiently large, whereas the two competitors do not.
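To make the formulation concrete, here is a minimal sketch of the two optimization problems involved, written in standard total-variation-denoising form; the notation (y, x, λ, p) is illustrative and not necessarily that of the paper. Given noisy samples y ∈ R^n of a piecewise constant signal, ℓ1 mean filtering solves

\[
\hat{x} = \arg\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\sum_{i=1}^{n}(y_i - x_i)^2 \;+\; \lambda \sum_{i=1}^{n-1}\lvert x_{i+1} - x_i\rvert ,
\]

and the estimated change points are the indices where the first-order difference of \(\hat{x}\) is nonzero. The approach described in the abstract replaces the absolute value with a concave, sparsity-promoting penalty p,

\[
\hat{x} = \arg\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\sum_{i=1}^{n}(y_i - x_i)^2 \;+\; \lambda \sum_{i=1}^{n-1} p\bigl(\lvert x_{i+1} - x_i\rvert\bigr),
\]

with p constrained so that the concavity of the penalty term is dominated by the strictly convex quadratic data term, keeping the overall objective strictly convex and its minimizer unique.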
Pages: 6650-6664
Number of pages: 15