HOW DO NOISE TAILS IMPACT ON DEEP RELU NETWORKS?

Times Cited: 1
Authors
Fan, Jianqing [1]
Gu, Yihong [1]
Zhou, Wen-Xin [2]
Affiliations
[1] Princeton Univ, Dept Operat Res & Financial Engn, Princeton, NJ 08544 USA
[2] Univ Illinois, Dept Informat & Decis Sci, Chicago, IL USA
Keywords
Robustness; truncation; heavy tails; optimal rates; approximability of ReLU networks
Keywords Plus
GEOMETRIZING RATES; CONVERGENCE-RATES; NEURAL-NETWORKS; REGRESSION; APPROXIMATION; ROBUST; BOUNDS
DOI
10.1214/24-AOS2428
CLC Classification Numbers
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Classification Codes
020208; 070103; 0714
Abstract
This paper investigates the stability of deep ReLU neural networks for nonparametric regression under the assumption that the noise has only a finite p-th moment. We unveil how the optimal rate of convergence depends on p, the degree of smoothness, and the intrinsic dimension in a class of nonparametric regression functions with hierarchical composition structure when both the adaptive Huber loss and deep ReLU neural networks are used. This optimal rate of convergence cannot be obtained by ordinary least squares, but can be achieved by the Huber loss with a properly chosen parameter that adapts to the sample size, smoothness, and moment parameters. A concentration inequality for the adaptive Huber ReLU neural network estimators with allowable optimization errors is also derived. To establish a matching lower bound within the class of neural network estimators using the Huber loss, we employ a strategy different from the traditional route: we construct a deep ReLU network estimator that has a better empirical loss than the true function, and the difference between these two functions furnishes a lower bound. This step is related to the Huberization bias, yet more critically to the approximability of deep ReLU networks. As a result, we also contribute some new results on the approximation theory of deep ReLU neural networks.
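The abstract describes estimating a regression function with a deep ReLU network trained under the Huber loss, whose robustification parameter must grow with the sample size to adapt to heavy-tailed noise. The following is a minimal sketch of that recipe in PyTorch, not the authors' code: the network shape, the Student-t noise, and the calibration tau = n**0.4 are illustrative assumptions, whereas the paper derives the optimal dependence of tau on the sample size, smoothness, and moment parameters.

    import torch
    import torch.nn as nn

    def relu_mlp(in_dim, width=64, depth=3):
        # A plain fully connected ReLU network (architecture chosen for illustration).
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, 1))
        return nn.Sequential(*layers)

    def fit_huber_relu(x, y, tau, epochs=200, lr=1e-3):
        # Minimize the Huber loss with robustification parameter tau:
        # quadratic for residuals |r| <= tau, linear beyond (robust to outliers).
        net = relu_mlp(x.shape[1])
        loss_fn = nn.HuberLoss(delta=tau)
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(net(x).squeeze(-1), y)
            loss.backward()
            opt.step()
        return net

    # Toy data with heavy-tailed noise: Student-t with 2.5 degrees of freedom,
    # so only moments of order p < 2.5 are finite.
    torch.manual_seed(0)
    n = 2000
    x = torch.rand(n, 1)
    noise = torch.distributions.StudentT(df=2.5).sample((n,))
    y = torch.sin(2 * torch.pi * x.squeeze(-1)) + noise

    tau = n ** 0.4  # hypothetical calibration; the paper ties tau to n, smoothness, and p
    net = fit_huber_relu(x, y, tau)

With light-tailed noise a fixed tau (or plain least squares) would suffice; the point of the adaptive choice is that tau must diverge with n at a rate governed by p and the smoothness to attain the optimal rate the abstract refers to.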
Pages: 1845-1871
Page Count: 27
Related Papers
50 records in total
  • [21] Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations
    Gonon, Lukas
    Schwab, Christoph
    ANALYSIS AND APPLICATIONS, 2023, 21 (01) : 1 - 47
  • [22] Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces
    Siegel, Jonathan W.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2023, 24
  • [23] DISCUSSION OF: "NONPARAMETRIC REGRESSION USING DEEP NEURAL NETWORKS WITH RELU ACTIVATION FUNCTION"
    Kutyniok, Gitta
    ANNALS OF STATISTICS, 2020, 48 (04) : 1902 - 1905
  • [24] Nonparametric regression on low-dimensional manifolds using deep ReLU networks: function approximation and statistical recovery
    Chen, Minshuo
    Jiang, Haoming
    Liao, Wenjing
    Zhao, Tuo
    INFORMATION AND INFERENCE-A JOURNAL OF THE IMA, 2022, 11 (04) : 1203 - 1253
  • [25] Biologically plausible deep learning - But how far can we go with shallow networks?
    Illing, Bernd
    Gerstner, Wulfram
    Brea, Johanni
    NEURAL NETWORKS, 2019, 118 : 90 - 101
  • [26] Generalization Error Bounds of Gradient Descent for Learning Over-Parameterized Deep ReLU Networks
    Cao, Yuan
    Gu, Quanquan
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 3349 - 3356
  • [27] Impact of Training Set Size on the Ability of Deep Neural Networks to Deal with Omission Noise
    Guetter, Jonas
    Kruspe, Anna
    Zhu, Xiao Xiang
    Niebling, Julia
    FRONTIERS IN REMOTE SENSING, 2022, 3
  • [28] Training Robust Deep Neural Networks via Adversarial Noise Propagation
    Liu, Aishan
    Liu, Xianglong
    Yu, Hang
    Zhang, Chongzhi
    Liu, Qiang
    Tao, Dacheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 5769 - 5781
  • [29] Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks
    Labate, Demetrio
    Shi, Ji
    NEURAL NETWORKS, 2024, 174
  • [30] Effect Of Injected Noise In Deep Neural Networks
    Nagabushan, Naresh
    Satish, Nishank
    Raghuram, S.
    2016 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND COMPUTING RESEARCH, 2016, : 134 - 138