From Kernel Methods to Neural Networks: A Unifying Variational Formulation

Cited by: 1
Author
Unser, Michael [1 ]
Affiliation
[1] Ecole Polytech Fed Lausanne EPFL, Biomed Imaging Grp, Stn 17, CH-1015 Lausanne, Switzerland
Keywords
Machine learning; Convex optimization; Regularization; Representer theorem; Kernel methods; Neural networks; Banach space; Approximation; Splines; Interpolation; Regression; Transform
DOI
10.1007/s10208-023-09624-9
Chinese Library Classification (CLC)
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
The minimization of a data-fidelity term and an additive regularization functional gives rise to a powerful framework for supervised learning. In this paper, we present a unifying regularization functional that depends on an operator $\mathrm{L}$ and on a generic Radon-domain norm. We establish the existence of a minimizer and give the parametric form of the solution(s) under very mild assumptions. When the norm is Hilbertian, the proposed formulation yields a solution that involves radial-basis functions and is compatible with the classical methods of machine learning. By contrast, for the total-variation norm, the solution takes the form of a two-layer neural network with an activation function that is determined by the regularization operator. In particular, we retrieve the popular ReLU networks by letting the operator be the Laplacian. We also characterize the solution for the intermediate regularization norms $\Vert\cdot\Vert=\Vert\cdot\Vert_{L_p}$ with $p\in(1,2]$. Our framework offers guarantees of universal approximation for a broad family of regularization operators or, equivalently, for a wide variety of shallow neural networks, including the cases (such as ReLU) where the activation function is increasing polynomially. It also explains the favorable role of bias and skip connections in neural architectures.
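As a reading aid, the following is a schematic LaTeX rendering of the variational problem described in the abstract and of the two solution regimes it mentions. The symbols $E$ (loss), $\psi$ (regularization profile), $\varphi$, $a_m$, $a_k$, $\boldsymbol{w}_k$, $b_k$, and the width $K$ are illustrative placeholders, not necessarily the paper's exact notation.

\[
  \min_{f}\; \sum_{m=1}^{M} E\bigl(y_m, f(\boldsymbol{x}_m)\bigr) \;+\; \psi\bigl(\Vert \mathrm{L} f \Vert\bigr),
\]
where the $(\boldsymbol{x}_m, y_m)$ are the training data, $E$ is the data-fidelity term, $\mathrm{L}$ is the regularization operator, and $\Vert\cdot\Vert$ is the Radon-domain norm. Per the abstract, the two extreme choices of the norm yield two familiar parametric forms for the minimizer:
\[
  \text{Hilbertian norm:}\quad f(\boldsymbol{x}) \;=\; \sum_{m=1}^{M} a_m\, \varphi\bigl(\Vert\boldsymbol{x}-\boldsymbol{x}_m\Vert\bigr) \;+\; (\text{null-space component of } \mathrm{L}),
\]
a radial-basis-function expansion compatible with classical kernel methods, versus
\[
  \text{Total-variation norm:}\quad f(\boldsymbol{x}) \;=\; \sum_{k=1}^{K} a_k\, \sigma\bigl(\boldsymbol{w}_k^{\mathsf T}\boldsymbol{x} - b_k\bigr) \;+\; \boldsymbol{u}^{\mathsf T}\boldsymbol{x} + c,
\]
a two-layer network whose activation $\sigma$ is determined by $\mathrm{L}$ (with $\sigma=\mathrm{ReLU}$ when the operator is the Laplacian), the affine part $\boldsymbol{u}^{\mathsf T}\boldsymbol{x}+c$ corresponding to the bias and skip connections mentioned at the end of the abstract.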
Pages: 1779-1818
Page count: 40