Sobolev-Type Embeddings for Neural Network Approximation Spaces

Cited by: 0
Authors
Philipp Grohs
Felix Voigtlaender
Affiliations
[1] University of Vienna, Faculty of Mathematics
[2] Research Network Data Science @ Uni Vienna, Department of Mathematics
[3] Johann Radon Institute
[4] Technical University of Munich
Source
Constructive Approximation | 2023, Vol. 57
Keywords
Deep neural networks; Approximation spaces; Hölder spaces; Embedding theorems; Optimal learning algorithms; Primary: 68T07; 46E35; Secondary: 65D05; 46E30;
DOI
Not available
Abstract
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated (with error measured in $L^p$) by ReLU neural networks with an increasing number of coefficients, subject to bounds on the magnitude of the coefficients and the number of hidden layers. We prove embedding theorems between these spaces for different values of p. Furthermore, we derive sharp embeddings of these approximation spaces into Hölder spaces. We find that, analogous to the case of classical function spaces (such as Sobolev or Besov spaces), it is possible to trade "smoothness" (i.e., approximation rate) for increased integrability. Combined with our earlier results in Grohs and Voigtlaender (Proof of the theory-to-practice gap in deep learning via sampling complexity bounds for neural network approximation spaces, 2021, arXiv preprint arXiv:2104.02746), our embedding theorems imply a somewhat surprising fact related to "learning" functions from a given neural network space based on point samples: if accuracy is measured with respect to the uniform norm, then an optimal "learning" algorithm for reconstructing functions that are well approximable by ReLU neural networks is simply given by piecewise constant interpolation on a tensor product grid.
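The learning algorithm named at the end of the abstract is concrete enough to illustrate: sample the target function at the centers of a uniform tensor product grid, then return the value of the nearest cell center for any query point. Below is a minimal two-dimensional sketch; the function and helper names (`piecewise_constant_interpolant`, the test function `f`) are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def piecewise_constant_interpolant(f, m):
    """Build a piecewise constant reconstruction of f on [0, 1]^2
    from m * m point samples taken at the cell centers of a uniform
    tensor product grid."""
    centers = (np.arange(m) + 0.5) / m
    X, Y = np.meshgrid(centers, centers, indexing="ij")
    samples = f(X, Y)

    def u(x, y):
        # Map each query point to its grid cell and return the stored sample.
        i = np.clip((np.asarray(x) * m).astype(int), 0, m - 1)
        j = np.clip((np.asarray(y) * m).astype(int), 0, m - 1)
        return samples[i, j]

    return u

# Example: reconstruct a smooth function from 64 * 64 point samples.
f = lambda x, y: np.sin(np.pi * x) * np.cos(np.pi * y)
u = piecewise_constant_interpolant(f, 64)
err = abs(u(0.3, 0.7) - f(0.3, 0.7))  # uniform error shrinks as m grows
```

For a Lipschitz function the uniform error of this reconstruction decays like the grid spacing 1/m, i.e., like the reciprocal square root of the number of samples in two dimensions; the paper's point is that, under the uniform norm, no sampling-based algorithm can do better on these neural network approximation spaces.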
Pages: 579–599
Page count: 20