High-dimensional dynamics of generalization error in neural networks

Cited by: 126
Authors
Advani, Madhu S. [1 ]
Saxe, Andrew M. [1 ,3 ]
Sompolinsky, Haim [1 ,2 ]
Affiliations
[1] Harvard Univ, Ctr Brain Sci, Cambridge, MA 02138 USA
[2] Hebrew Univ Jerusalem, Edmond & Lily Safra Ctr Brain Sci, IL-91904 Jerusalem, Israel
[3] Univ Oxford, Dept Expt Psychol, Oxford OX2 6GG, England
Funding
Wellcome Trust (UK)
Keywords
Neural networks; Generalization error; Random matrix theory; Statistical mechanics; Singularities; Gradient
DOI
10.1016/j.neunet.2020.08.022
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We perform an analysis of the average generalization dynamics of large neural networks trained using gradient descent. We study the practically relevant "high-dimensional" regime, where the number of free parameters in the network is on the order of, or even larger than, the number of examples in the dataset. Using random matrix theory and exact solutions in linear models, we derive the generalization and training error dynamics of learning and analyze how they depend on the dimensionality of the data and the signal-to-noise ratio of the learning problem. We find that the dynamics of gradient descent learning naturally protect against overtraining and overfitting in large networks. Overtraining is worst at intermediate network sizes, when the effective number of free parameters equals the number of samples, and thus can be reduced by making a network smaller or larger. Additionally, in the high-dimensional regime, low generalization error requires starting with small initial weights. We then turn to non-linear neural networks, and show that making networks very large does not harm their generalization performance. On the contrary, it can in fact reduce overtraining, even without early stopping or regularization of any sort. We identify two novel phenomena underlying this behavior in overcomplete models: first, there is a frozen subspace of the weights in which no learning occurs under gradient descent; and second, the statistical properties of the high-dimensional regime yield better-conditioned input correlations which protect against overtraining. We demonstrate that standard application of theories such as Rademacher complexity is inaccurate in predicting the generalization performance of deep neural networks, and derive an alternative bound which incorporates the frozen subspace and conditioning effects and qualitatively matches the behavior observed in simulation. (c) 2020 The Authors. Published by Elsevier Ltd.
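The frozen-subspace phenomenon mentioned in the abstract can be illustrated with a minimal linear-model sketch (not the paper's code): when there are fewer samples than parameters, the gradient of the squared loss always lies in the row space of the data matrix, so any weight component in its null space is never updated. All variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 50                        # fewer samples (n) than parameters (p)
X = rng.standard_normal((n, p))      # data matrix
y = rng.standard_normal(n)           # targets

w = 0.1 * rng.standard_normal(p)     # small initial weights
w0 = w.copy()

# Plain gradient descent on the squared loss ||Xw - y||^2 / (2n).
lr = 0.01
for _ in range(2000):
    grad = X.T @ (X @ w - y) / n     # gradient lies in the row space of X
    w -= lr * grad

# Basis for the null space of X: the p - n directions the gradient never touches.
_, _, Vt = np.linalg.svd(X, full_matrices=True)
null_basis = Vt[n:]

# Projection of the total weight change onto the frozen subspace stays
# numerically zero: no learning occurs in those directions.
delta_frozen = np.max(np.abs(null_basis @ (w - w0)))
print(delta_frozen)
```

Since the weights in the frozen subspace retain their initial values forever, they contribute noise to the learned function that early stopping cannot remove, which is one reason the abstract notes that small initial weights are needed for low generalization error in this regime.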
Pages: 428-446
Page count: 19