Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness

Cited by: 33
Authors
Jin, Pengzhan [1 ,2 ]
Lu, Lu [3 ]
Tang, Yifa [1 ,2 ]
Karniadakis, George Em [3 ]
Affiliations
[1] Chinese Acad Sci, Acad Math & Syst Sci, ICMSEC, LSEC, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[3] Brown Univ, Div Appl Math, Providence, RI 02912 USA
Funding
National Natural Science Foundation of China;
Keywords
Neural networks; Generalization error; Learnability; Data distribution; Cover complexity; Neural network smoothness;
DOI
10.1016/j.neunet.2020.06.024
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The accuracy of deep learning, i.e., deep neural networks, can be characterized by dividing the total error into three main types: approximation error, optimization error, and generalization error. Whereas there are some satisfactory answers to the problems of approximation and optimization, much less is known about the theory of generalization. Most existing theoretical works on generalization fail to explain the performance of neural networks in practice. To derive a meaningful bound, we study the generalization error of neural networks for classification problems in terms of data distribution and neural network smoothness. We introduce the cover complexity (CC) to measure the difficulty of learning a data set and the inverse of the modulus of continuity to quantify neural network smoothness. A quantitative bound for the expected accuracy/error is derived by considering both the CC and neural network smoothness. Although most of the analysis is general and not specific to neural networks, we validate our theoretical assumptions and results numerically for neural networks on several image data sets. The numerical results confirm that the expected error of trained networks, scaled by the square root of the number of classes, has a linear relationship with respect to the CC. We observe a clear consistency between test loss and neural network smoothness during the training process. In addition, we demonstrate empirically that neural network smoothness decreases as the network size increases, whereas the smoothness is insensitive to the training dataset size. (C) 2020 Elsevier Ltd. All rights reserved.
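The abstract quantifies neural network smoothness by the inverse of the modulus of continuity, i.e., omega_f(delta) = sup over ||x - y|| <= delta of the largest change in f. As a rough illustration of how such a quantity could be estimated empirically, the minimal Python sketch below samples perturbed input pairs around a reference set and records the largest output change; the function empirical_modulus_of_continuity, the sampling scheme, and the toy function are illustrative assumptions, not the estimator used in the paper.

import numpy as np

def empirical_modulus_of_continuity(f, X, delta, n_pairs=1000, rng=None):
    # Estimate omega_f(delta) = sup_{||x - y|| <= delta} max_k |f_k(x) - f_k(y)|
    # by sampling reference points from X and perturbing them within radius delta.
    rng = np.random.default_rng(rng)
    x = X[rng.integers(0, len(X), size=n_pairs)]
    # Uniform directions on the unit sphere, radii uniform over the d-dimensional ball.
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    r = delta * rng.uniform(size=(n_pairs, 1)) ** (1.0 / x.shape[1])
    y = x + r * u
    # The largest observed output change is a lower bound on the true supremum.
    return np.abs(f(x) - f(y)).max()

if __name__ == "__main__":
    # Toy smooth vector-valued function standing in for a trained network's output map.
    f = lambda z: np.stack([np.tanh(z.sum(axis=1)), np.sin(z[:, 0])], axis=1)
    X = np.random.default_rng(0).uniform(-1.0, 1.0, size=(500, 4))
    omega = empirical_modulus_of_continuity(f, X, delta=0.1, rng=1)
    print("estimated omega_f(0.1):", omega, "| smoothness ~ 1/omega:", 1.0 / omega)

Increasing n_pairs tightens the estimate at the cost of more forward evaluations; in practice one would replace the toy function with the trained network's prediction function.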
Pages: 85 - 99
Number of pages: 15
References
61 items in total
  • [21] Gonen A., 2017, arXiv:1701.04271, p. 1043
  • [22] Gunasekar S., 2018, Advances in Neural Information Processing Systems, Vol. 31
  • [23] Hardt M., 2015, arXiv:1509.01240
  • [24] Hornik K., Stinchcombe M., White H., Multilayer feedforward networks are universal approximators, Neural Networks, 1989, 2(5): 359-366
  • [25] Xu Z.-Q. J., 2019, arXiv:1901.06523
  • [26] Kawaguchi K., 2017, Generalization in deep learning
  • [27] Keskar N. S., 2016, arXiv preprint
  • [28] Kingma D. P., Ba J., 2014, Adam: A method for stochastic optimization
  • [29] Krizhevsky A., 2009, Learning multiple layers of features from tiny images
  • [30] Krizhevsky A., Sutskever I., Hinton G. E., ImageNet classification with deep convolutional neural networks, Communications of the ACM, 2017, 60(6): 84-90