From Kernel Methods to Neural Networks: A Unifying Variational Formulation

Cited by: 1
Author
Unser, Michael [1]
Affiliation
[1] Ecole Polytechnique Federale de Lausanne (EPFL), Biomedical Imaging Group, Station 17, CH-1015 Lausanne, Switzerland
Keywords
Machine learning; Convex optimization; Regularization; Representer theorem; Kernel methods; Neural networks; Banach space; Approximation; Splines; Interpolation; Regression; Transform
DOI
10.1007/s10208-023-09624-9
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Classification Code
081202;
Abstract
The minimization of a data-fidelity term and an additive regularization functional gives rise to a powerful framework for supervised learning. In this paper, we present a unifying regularization functional that depends on an operator $\mathrm{L}$ and on a generic Radon-domain norm. We establish the existence of a minimizer and give the parametric form of the solution(s) under very mild assumptions. When the norm is Hilbertian, the proposed formulation yields a solution that involves radial-basis functions and is compatible with the classical methods of machine learning. By contrast, for the total-variation norm, the solution takes the form of a two-layer neural network with an activation function that is determined by the regularization operator. In particular, we retrieve the popular ReLU networks by letting the operator be the Laplacian. We also characterize the solution for the intermediate regularization norms $\Vert \cdot \Vert = \Vert \cdot \Vert_{L_p}$ with $p \in (1,2]$. Our framework offers guarantees of universal approximation for a broad family of regularization operators or, equivalently, for a wide variety of shallow neural networks, including the cases (such as ReLU) where the activation function is increasing polynomially. It also explains the favorable role of bias and skip connections in neural architectures.
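As an aid to reading the abstract, the following is a minimal sketch of the learning problem and of the parametric solution it describes; the notation (data pairs $(\boldsymbol{x}_m, y_m)$, convex loss $E$, weights $a_k, \boldsymbol{w}_k$, biases $b_k$, polynomial term $p$) is illustrative and not taken verbatim from the paper, whose exact formulation is posed in a Radon-domain Banach space:
$$\min_{f} \; \sum_{m=1}^{M} E\bigl(y_m, f(\boldsymbol{x}_m)\bigr) \;+\; \lambda\, \bigl\Vert \mathrm{L} f \bigr\Vert ,$$
where $\Vert \cdot \Vert$ stands for the generic Radon-domain norm. For a total-variation-type norm, the abstract states that minimizers take the two-layer network form
$$f(\boldsymbol{x}) \;=\; \sum_{k=1}^{K} a_k\, \sigma\bigl(\boldsymbol{w}_k^{\mathsf{T}}\boldsymbol{x} - b_k\bigr) \;+\; p(\boldsymbol{x}),$$
with an activation $\sigma$ determined by $\mathrm{L}$ (the ReLU when $\mathrm{L}$ is the Laplacian) and a low-degree polynomial/affine term $p$ that accounts for the bias and skip connections.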
Pages: 1779-1818
Number of pages: 40
Related Papers
50 records in total
  • [31] Neural networks as a unifying learning model for random normal form games
    Spiliopoulos, Leonidas
    ADAPTIVE BEHAVIOR, 2011, 19 (06) : 383 - 408
  • [32] Implementation and comparison of kernel-based learning methods to predict metabolic networks
    Roche-Lima, A.
    NETWORK MODELING ANALYSIS IN HEALTH INFORMATICS AND BIOINFORMATICS, 2016, 5 (1)
  • [33] Structured Dropout Variational Inference for Bayesian Neural Networks
    Son Nguyen
    Duong Nguyen
    Khai Nguyen
    Khoat Than
    Hung Bui
    Nhat Ho
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [34] Variational approach to unsupervised learning algorithms of neural networks
    Likhovidov, V
    NEURAL NETWORKS, 1997, 10 (02) : 273 - 289
  • [35] Beyond Transformers: fault type detection in maintenance tickets with Kernel Methods, Boost Decision Trees and Neural Networks
    Campese, Stefano
    Agostini, Federico
    Pazzini, Jacopo
    Pozza, Davide
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [36] A preliminary empirical comparison of recursive neural networks and tree kernel methods on regression tasks for tree structured domains
    Micheli, A
    Portera, F
    Sperduti, A
    NEUROCOMPUTING, 2005, 64 : 73 - 92
  • [37] Non intrusive reduced order modeling of parametrized PDEs by kernel POD and neural networks
    Salvador, M.
    Dede, L.
    Manzoni, A.
    COMPUTERS & MATHEMATICS WITH APPLICATIONS, 2021, 104 : 1 - 13
  • [38] Kafnets: Kernel-based non-parametric activation functions for neural networks
    Scardapane, Simone
    Van Vaerenbergh, Steven
    Totaro, Simone
    Uncini, Aurelio
    NEURAL NETWORKS, 2019, 110 : 19 - 32
  • [39] Formulation of special fats by neural networks: A statistical approach
    Block, JM
    Barrera-Arellano, D
    Figueiredo, MF
    Gomide, FC
    Sauer, L
    JOURNAL OF THE AMERICAN OIL CHEMISTS SOCIETY, 1999, 76 (11) : 1357 - 1361
  • [40] Development of methods based on neural networks in the estimation of mineral resources
    Alberdi, Elisabete
    Hernandez, Heber
    Goti, Aitor
    DYNA, 2024, 99 (03) : 303 - 310