Learning of Continuous and Piecewise-Linear Functions With Hessian Total-Variation Regularization

Cited by: 8
Authors
Campos, Joaquim [1]
Aziznejad, Shayan [1]
Unser, Michael [1]
Affiliations
[1] École Polytechnique Fédérale de Lausanne (EPFL), Biomedical Imaging Group, CH-1015 Lausanne, Switzerland
Source
IEEE OPEN JOURNAL OF SIGNAL PROCESSING | 2022 / Vol. 3
Funding
Swiss National Science Foundation; European Research Council
Keywords
Splines (mathematics); TV; Neural networks; Junctions; Supervised learning; Signal processing; Search problems; Box splines; barycentric coordinates; supervised learning; sparsity; variational methods; INVERSE PROBLEMS; NEURAL-NETWORKS; BOX-SPLINES; RECONSTRUCTION; REGRESSION; SPACES;
DOI
10.1109/OJSP.2021.3136488
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
We develop a novel 2D functional-learning framework that employs a sparsity-promoting regularization based on second-order derivatives. Motivated by the nature of the regularizer, we restrict the search space to the span of piecewise-linear box splines shifted on a 2D lattice. Our formulation of the infinite-dimensional problem on this search space allows us to recast it exactly as a finite-dimensional one that can be solved using standard methods of convex optimization. Since our search space is composed of continuous and piecewise-linear functions, our work presents itself as an alternative to the training of networks that deploy rectified linear units, which also construct models in this family. The advantages of our method are fourfold: the ability to enforce sparsity, favoring models with fewer piecewise-linear regions; the use of a rotation-, scale-, and translation-invariant regularization; a single hyperparameter that controls the complexity of the model; and a clear model interpretability that provides a straightforward relation between the parameters and the overall learned function. We validate our framework in various experimental setups and compare it with neural networks.
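A minimal LaTeX sketch of the formulation summarized in the abstract, in our own illustrative notation (the loss E, the data pairs (x_m, y_m), the weight \lambda, and the basis \varphi are assumptions for exposition, not quoted from the paper):

\[
  f^{\star} \in \arg\min_{f}\; \sum_{m=1}^{M} E\bigl(y_m, f(\mathbf{x}_m)\bigr)
  + \lambda\,\mathrm{HTV}(f),
  \qquad
  \mathrm{HTV}(f) = \int_{\mathbb{R}^{2}} \bigl\|\mathbf{H}\{f\}(\mathbf{x})\bigr\|_{S_1}\,\mathrm{d}\mathbf{x}
  \;\; \text{for twice-differentiable } f,
\]

where \mathbf{H}\{f\} denotes the Hessian of f and \|\cdot\|_{S_1} the Schatten-1 (nuclear) norm; for non-smooth f, the integral is understood as the total variation of the Hessian in the sense of measures. Restricting the search to lattice shifts of a piecewise-linear box spline, f(\mathbf{x}) = \sum_{\mathbf{k}} c[\mathbf{k}]\,\varphi(\mathbf{x}-\mathbf{k}), turns both the data term and the regularizer into finite-dimensional convex functions of the coefficients c[\mathbf{k}], which is what allows the exact finite-dimensional recast mentioned above.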
Pages: 36-48
Number of pages: 13