Sparsest piecewise-linear regression of one-dimensional data

Cited: 9
Authors
Debarre, Thomas [1]
Denoyelle, Quentin [1]
Unser, Michael [1]
Fageot, Julien [2]
Affiliations
[1] Biomed Imaging Grp, EPFL, Lausanne, Switzerland
[2] AudioVisual Commun Lab, EPFL, Lausanne, Switzerland
Funding
Swiss National Science Foundation; European Research Council;
Keywords
Inverse problems; Total-variation norm for measures; Sparsity; Splines; INVERSE PROBLEMS; OPTIMAL APPROXIMATION; REPRESENTER THEOREMS; SUPPORT RECOVERY; SUPERRESOLUTION; RECONSTRUCTION; DICTIONARIES; RESOLUTION; SHRINKAGE; NETWORKS;
DOI
10.1016/j.cam.2021.114044
CLC number
O29 [Applied Mathematics];
Discipline code
070104;
Abstract
We study the problem of one-dimensional regression of data points with total-variation (TV) regularization (in the sense of measures) on the second derivative, which is known to promote piecewise-linear solutions with few knots. While there are efficient algorithms for determining such adaptive splines, the difficulty with TV regularization is that the solution is generally non-unique, an aspect that is often ignored in practice. In this paper, we present a systematic analysis that results in a complete description of the solution set with a clear distinction between the cases where the solution is unique and those, much more frequent, where it is not. For the latter scenario, we identify the sparsest solutions, i.e., those with the minimum number of knots, and we derive a formula to compute the minimum number of knots based solely on the data points. To achieve this, we first consider the problem of exact interpolation which leads to an easier theoretical analysis. Next, we relax the exact interpolation requirement to a regression setting, and we consider a penalized optimization problem with a strictly convex data-fidelity cost function. We show that the underlying penalized problem can be reformulated as a constrained problem, and thus that all our previous results still apply. Based on our theoretical analysis, we propose a simple and fast two-step algorithm, agnostic to uniqueness, to reach a sparsest solution of this penalized problem. (c) 2021 The Author(s). Published by Elsevier B.V.
Pages: 30
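
For intuition, the grid-restricted, discrete analogue of the penalized problem described in the abstract is l1 trend filtering: a quadratic data-fidelity term plus an l1 penalty on second-order finite differences, which likewise promotes piecewise-linear fits with few knots. The sketch below is a generic convex-programming formulation of that analogue, not the authors' two-step algorithm; the function name l1_trend_filter, the regularization weight lam, and the 1e-6 knot threshold are illustrative choices.

# Minimal sketch (assumption: a uniform sampling grid; cvxpy as the solver).
# Discrete analogue of min over f of  (1/2)||f - y||^2 + lam * TV(D^2 f),
# with the second derivative replaced by second-order finite differences.
import numpy as np
import cvxpy as cp

def l1_trend_filter(y, lam):
    """Fit a piecewise-linear signal to samples y on a uniform grid."""
    n = len(y)
    # Second-difference operator D: (D f)_i = f_{i} - 2 f_{i+1} + f_{i+2}
    D = np.diff(np.eye(n), n=2, axis=0)
    f = cp.Variable(n)
    cost = 0.5 * cp.sum_squares(f - y) + lam * cp.norm1(D @ f)
    cp.Problem(cp.Minimize(cost)).solve()
    # Knots are the grid points where the second difference is (numerically) nonzero.
    knots = np.flatnonzero(np.abs(D @ f.value) > 1e-6)
    return f.value, knots

# Usage: noisy samples of a piecewise-linear signal with one knot.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = np.minimum(2.0 * x, 1.0) + 0.05 * rng.standard_normal(100)
f_hat, knots = l1_trend_filter(y, lam=1.0)

Note that, mirroring the non-uniqueness discussed in the abstract, a generic solver returns one minimizer of this l1-penalized problem, which need not be the sparsest one; identifying the solutions with the minimum number of knots is precisely the contribution of the paper.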