Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent

Cited by: 0
Authors
Holzmueller, David [1 ]
Steinwart, Ingo [1 ]
Affiliations
[1] Univ Stuttgart, Fac Math & Phys, Inst Stochast & Applicat, Stuttgart, Germany
Keywords
Neural networks; consistency; gradient descent; initialization; neural tangent kernel; local minima
DOI: not available
Chinese Library Classification (CLC): TP [Automation and computer technology]
Discipline code: 0812
Abstract
We prove that two-layer (Leaky)ReLU networks initialized, e.g., by the widely used method proposed by He et al. (2015) and trained using gradient descent on a least-squares loss are not universally consistent. Specifically, we describe a large class of one-dimensional data-generating distributions for which, with high probability, gradient descent only finds a bad local minimum of the optimization landscape, since it is unable to move the biases far away from their initialization at zero. It turns out that in these cases, the found network essentially performs linear regression even if the target function is non-linear. We further provide numerical evidence that this happens in practical situations and for some multi-dimensional distributions, and that stochastic gradient descent exhibits similar behavior. We also provide empirical results on how the choice of initialization and optimizer influences this behavior.
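The setting the abstract describes can be reproduced in a few lines of NumPy. The sketch below is illustrative only (the dataset, width, and learning rate are my own choices, not taken from the paper): a two-layer ReLU network with He et al. (2015) initialization, i.e. Gaussian weights with variance 2/fan_in and biases initialized to zero, trained by full-batch gradient descent on a least-squares loss against a non-linear 1-D target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset with a non-linear target (illustrative, not from the paper).
n = 64
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * X)

# Two-layer ReLU network with He et al. (2015) initialization:
# weights ~ N(0, 2 / fan_in), biases set to zero.
m = 100  # hidden width
W1 = rng.normal(0.0, np.sqrt(2.0 / 1), size=(1, m))
b1 = np.zeros(m)
W2 = rng.normal(0.0, np.sqrt(2.0 / m), size=(m, 1))
b2 = np.zeros(1)

def forward(X):
    H = np.maximum(X @ W1 + b1, 0.0)   # ReLU hidden activations
    return H, H @ W2 + b2

def mse(pred, y):
    return 0.5 * np.mean((pred - y) ** 2)

lr = 0.05
_, pred0 = forward(X)
loss0 = mse(pred0, y)

for _ in range(500):
    H, pred = forward(X)
    g = (pred - y) / n                 # d(loss)/d(pred)
    grad_W2 = H.T @ g
    grad_b2 = g.sum(axis=0)
    gH = (g @ W2.T) * (H > 0)          # backprop through the ReLU
    grad_W1 = X.T @ gH
    grad_b1 = gH.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

_, pred = forward(X)
loss_final = mse(pred, y)
print(loss_final < loss0)       # full-batch GD decreases the training loss
print(np.max(np.abs(b1)))       # inspect how far the biases move from zero
```

In the paper's regime of bad distributions, the interesting quantity is the last line: the hidden biases barely move from their zero initialization, which is what forces the learned function toward an affine fit. This toy run only illustrates the training setup; whether the biases stay near zero here depends on the chosen distribution and hyperparameters.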
Pages: 82