A mean field view of the landscape of two-layer neural networks

Cited by: 424
Authors
Mei, Song [1 ]
Montanari, Andrea [2 ,3 ]
Phan-Minh Nguyen [2 ]
机构
[1] Stanford Univ, Inst Computat & Math Engn, Stanford, CA 94305 USA
[2] Stanford Univ, Dept Elect Engn, Stanford, CA 94305 USA
[3] Stanford Univ, Dept Stat, Stanford, CA 94305 USA
Keywords
neural networks; stochastic gradient descent; gradient flow; Wasserstein space; partial differential equations
DOI
10.1073/pnas.1806579115
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a nonconvex high-dimensional objective (risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the former case, does this happen because local minima are absent or because SGD somehow avoids them? In the latter, why do local minima reached by SGD have good generalization properties? In this paper, we consider a simple case, namely two-layer neural networks, and prove that, in a suitable scaling limit, SGD dynamics is captured by a certain nonlinear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows for "averaging out" some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD.
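For orientation, the distributional dynamics (DD) referred to above is a nonlinear PDE for the distribution rho_t of single-neuron parameters. The following display is a sketch reconstructed from the paper's setup; the potentials V and U and the time rescaling xi(t) are as defined in the paper, and the exact constants should be checked against it:

    \partial_t \rho_t = 2\,\xi(t)\, \nabla_\theta \cdot \bigl( \rho_t\, \nabla_\theta \Psi(\theta; \rho_t) \bigr),
    \qquad
    \Psi(\theta; \rho) = V(\theta) + \int U(\theta, \tilde{\theta})\, \rho(\mathrm{d}\tilde{\theta}),

with V(\theta) = -\mathbb{E}\{ y\, \sigma_*(x; \theta) \} and U(\theta_1, \theta_2) = \mathbb{E}\{ \sigma_*(x; \theta_1)\, \sigma_*(x; \theta_2) \}, where \sigma_*(x; \theta) is the response of a single neuron. Consistent with the paper's keywords, DD can be read as a gradient flow of the risk, viewed as a functional of rho, in Wasserstein space.

To make the scaling concrete, here is a minimal, self-contained Python sketch of online SGD on a two-layer network with the 1/N output scaling under which the empirical measure of the neurons is expected to track DD. The target rule, activation, and hyperparameters are illustrative placeholders of ours, not the paper's experiments:

import numpy as np

# Two-layer mean-field network: f(x) = (1/N) * sum_i sigma(<w_i, x>).
# Online SGD on fresh samples; as N grows and the step size eps shrinks,
# the empirical distribution of the rows w_i is expected to follow DD.

rng = np.random.default_rng(0)
d, N = 20, 500                              # input dimension, hidden units
eps = 0.05                                  # step size (plays the role of dt)
w = rng.normal(size=(N, d)) / np.sqrt(d)    # neuron parameters theta_i = w_i

sigma = np.tanh                             # hidden-unit activation

def d_sigma(z):
    # Derivative of tanh, used in the gradient step.
    return 1.0 - np.tanh(z) ** 2

for _ in range(10_000):
    x = rng.normal(size=d)                  # fresh sample (one-pass SGD)
    y = np.tanh(x[0])                       # toy single-neuron teacher label
    pre = w @ x                             # pre-activations <w_i, x>
    f = sigma(pre).mean()                   # network output with 1/N scaling
    # Squared-loss step on each w_i; the 1/N from the output scaling is
    # dropped from the update, matching the rescaling under which every
    # neuron moves at an O(eps) rate per step.
    w += eps * (y - f) * d_sigma(pre)[:, None] * x[None, :]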
Pages: E7665–E7671
Page count: 7