A Representer Theorem for Deep Neural Networks

Cited by: 0
Authors
Unser, Michael [1 ]
Affiliation
[1] Ecole Polytech Fed Lausanne, Biomed Imaging Grp, CH-1015 Lausanne, Switzerland
Funding
Swiss National Science Foundation;
Keywords
splines; regularization; sparsity; learning; deep neural networks; activation functions; LINEAR INVERSE PROBLEMS; SPLINES; KERNELS;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812
Abstract
We propose to optimize the activation functions of a deep neural network by adding a corresponding functional regularization to the cost function. We justify the use of a second-order total-variation criterion. This allows us to derive a general representer theorem for deep neural networks that makes a direct connection with splines and sparsity. Specifically, we show that the optimal network configuration can be achieved with activation functions that are nonuniform linear splines with adaptive knots. The bottom line is that the action of each neuron is encoded by a spline whose parameters (including the number of knots) are optimized during the training procedure. The scheme results in a computational structure that is compatible with existing deep-ReLU, parametric ReLU, APL (adaptive piecewise-linear), and MaxOut architectures. It also suggests novel optimization challenges and makes an explicit link with ℓ1 minimization and sparsity-promoting techniques.
Pages: 30
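
The abstract states that the optimal activations are nonuniform linear splines with adaptive knots; such a spline can be written as sigma(x) = b1 + b2*x + sum_k a_k * (x - tau_k)_+. Below is a minimal PyTorch sketch of a learnable activation in that form. It is not the authors' code: the class and method names are hypothetical, the knots are restricted to a fixed candidate grid for simplicity (the theorem allows fully adaptive knots), and an ℓ1 penalty on the ReLU coefficients a_k stands in for the second-order total-variation regularizer, driving most candidate knots inactive.

```python
import torch
import torch.nn as nn


class SplineActivation(nn.Module):
    """Learnable linear-spline activation: b1 + b2*x + sum_k a_k * relu(x - tau_k)."""

    def __init__(self, num_knots: int = 21, knot_range: float = 3.0):
        super().__init__()
        # Fixed grid of candidate knot locations tau_k (a simplifying
        # assumption; training can zero out most coefficients a_k so that
        # only a few knots remain active).
        self.register_buffer(
            "knots", torch.linspace(-knot_range, knot_range, num_knots)
        )
        self.a = nn.Parameter(torch.zeros(num_knots))    # ReLU coefficients a_k
        self.b = nn.Parameter(torch.tensor([0.0, 1.0]))  # affine part (b1, b2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (..., 1) - (K,) broadcasts to (..., K): one shifted ReLU per knot,
        # then a weighted sum over the knot axis.
        ramps = torch.relu(x.unsqueeze(-1) - self.knots)
        return self.b[0] + self.b[1] * x + ramps @ self.a

    def tv2_penalty(self) -> torch.Tensor:
        # l1 norm of the a_k: a convex surrogate for the second-order total
        # variation of the spline, promoting sparsity in the active knots.
        return self.a.abs().sum()
```

In training, one would add lambda times the sum of tv2_penalty() over all such activations to the data-fidelity loss, mirroring the functional regularization described in the abstract.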