Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks

Cited: 12
Authors
Parhi, Rahul [1,2]
Nowak, Robert D. [1]
Affiliations
[1] Univ Wisconsin Madison, Dept Elect & Comp Engn, Madison, WI 53706 USA
[2] Ecole Polytech Fed Lausanne, Biomed Imaging Grp, CH-1015 Lausanne, Switzerland
Keywords
Estimation; Training; Biological neural networks; TV; Radon; Noise measurement; Neurons; Neural networks; ridge functions; sparsity; function approximation; nonparametric function estimation; NONPARAMETRIC REGRESSION; ASYMPTOTIC EQUIVALENCE; CONVERGENCE-RATES; APPROXIMATION; BOUNDS; MULTIVARIATE; SPLINES;
DOI
10.1109/TIT.2022.3208653
CLC Number
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
We study the problem of estimating an unknown function from noisy data using shallow ReLU neural networks. The estimators we study minimize the sum of squared data-fitting errors plus a regularization term proportional to the squared Euclidean norm of the network weights. This minimization corresponds to the common approach of training a neural network with weight decay. We quantify the performance (mean-squared error) of these neural network estimators when the data-generating function belongs to the second-order Radon-domain bounded variation space. This space of functions was recently proposed as the natural function space associated with shallow ReLU neural networks. We derive a minimax lower bound for the estimation problem for this function space and show that the neural network estimators are minimax optimal up to logarithmic factors. This minimax rate is immune to the curse of dimensionality. We quantify an explicit gap between neural networks and linear methods (which include kernel methods) by deriving a linear minimax lower bound for the estimation problem, showing that linear methods necessarily suffer the curse of dimensionality in this function space. As a result, this paper sheds light on the phenomenon that neural networks seem to break the curse of dimensionality.
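As a worked illustration of the training objective described in the abstract (a minimal sketch; the width K, the penalty parameter \lambda, and the parameterization (w_k, b_k, v_k, c) are notation assumed here for clarity, not necessarily the paper's own):

  f_\theta(x) = \sum_{k=1}^{K} v_k \, (w_k^\top x - b_k)_+ + c,  where  (t)_+ = \max\{t, 0\},

  \hat{f} \in \operatorname*{arg\,min}_{\theta} \; \sum_{i=1}^{n} \bigl( y_i - f_\theta(x_i) \bigr)^2 + \lambda \sum_{k=1}^{K} \bigl( |v_k|^2 + \|w_k\|_2^2 \bigr).

The second term is the weight-decay penalty on the input and output weights. At a minimizer, the homogeneity of the ReLU allows the weights to be rescaled so that this squared penalty behaves like a penalty proportional to \sum_{k} |v_k| \, \|w_k\|_2, which is how training with weight decay connects to the second-order Radon-domain bounded variation norm studied in the paper.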
Pages: 1125 - 1140
Page count: 16
Related Papers
50 records in total
  • [1] Random Sketching for Neural Networks With ReLU
    Wang, Di
    Zeng, Jinshan
    Lin, Shao-Bo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (02) : 748 - 762
  • [2] Quantile regression with ReLU Networks: Estimators and minimax rates
    Padilla, Oscar Hernan Madrid
    Tansey, Wesley
    Chen, Yanzhen
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [3] Nonparametric Regression Using Over-parameterized Shallow ReLU Neural Networks
    Yang, Yunfei
    Zhou, Ding-Xuan
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25 : 1 - 35
  • [4] Optimal approximation of piecewise smooth functions using deep ReLU neural networks
    Petersen, Philipp
    Voigtlaender, Felix
    NEURAL NETWORKS, 2018, 108 : 296 - 330
  • [5] On minimal representations of shallow ReLU networks
    Dereich, Steffen
    Kassing, Sebastian
    NEURAL NETWORKS, 2022, 148 : 121 - 128
  • [6] Weighted variation spaces and approximation by shallow ReLU networks
DeVore, Ronald
    Nowak, Robert D.
    Parhi, Rahul
    Siegel, Jonathan W.
    APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2025, 74
  • [7] Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces
    Siegel, Jonathan W.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2023, 24
  • [8] Minimax Optimal Density Estimation Using a Shallow Generative Model with a One-Dimensional Latent Variable
    Kwon, Hyeok Kyu
    Chae, Minwoo
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [9] Locally linear attributes of ReLU neural networks
    Sattelberg, Ben
    Cavalieri, Renzo
    Kirby, Michael
    Peterson, Chris
    Beveridge, Ross
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
  • [10] Optimal Rates of Approximation by Shallow ReLU^k Neural Networks and Applications to Nonparametric Regression
    Yang, Yunfei
    Zhou, Ding-Xuan
    CONSTRUCTIVE APPROXIMATION, 2024