Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation

Cited by: 41
Authors
Oostwal, Elisa [1]
Straat, Michiel [1]
Biehl, Michael [1]
Affiliation
[1] University of Groningen, Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, Nijenborgh 9, NL-9747 AG Groningen, Netherlands
Keywords
Neural networks; Machine learning; Statistical physics; Statistical mechanics; Phase transitions; Approximation; Dynamics; Physics
DOI
10.1016/j.physa.2020.125517
Chinese Library Classification
O4 [Physics]
Subject Classification Code
0702
Abstract
By applying concepts from the statistical physics of learning, we study layered neural networks of rectified linear units (ReLU). The comparison with conventional, sigmoidal activation functions is at the center of interest. We compute typical learning curves for large shallow networks with K hidden units in matching student-teacher scenarios. The systems undergo phase transitions, i.e., sudden changes of the generalization performance, via the process of hidden-unit specialization at critical sizes of the training set. Surprisingly, our results show that the training behavior of ReLU networks is qualitatively different from that of networks with sigmoidal activations. In networks with K >= 3 sigmoidal hidden units, the transition is discontinuous: specialized network configurations coexist and compete with states of poor performance even for very large training sets. In contrast, the use of ReLU activations results in continuous transitions for all K. For large enough training sets, two competing, differently specialized states display similar generalization abilities, which coincide exactly for large hidden layers in the limit K -> infinity. Our findings are also confirmed in Monte Carlo simulations of the training processes. (C) 2020 The Author(s). Published by Elsevier B.V.
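To make the abstract's setup concrete, below is a minimal NumPy/SciPy sketch of a matching student-teacher scenario with soft committee machines: a fixed teacher with K hidden units generates noise-free labels, a student of identical architecture is fitted to P examples, and the generalization error is estimated by Monte Carlo over fresh random inputs. The dimensions N, K, P, the learning rate eta, and the plain gradient-descent training loop are illustrative assumptions of this sketch; the paper itself analyzes the equilibrium (Gibbs) properties of the training-error landscape rather than any particular optimizer.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
N, K, P = 50, 3, 2000  # input dimension, hidden units, training set size (assumed values)

def g(h, act):
    """Hidden-unit activation: ReLU, or the sigmoidal erf(h / sqrt(2))
    commonly used in statistical-physics studies of committee machines."""
    return np.maximum(h, 0.0) if act == "relu" else erf(h / np.sqrt(2))

def g_prime(h, act):
    """Derivative of the activation, needed for the gradient."""
    return (h > 0).astype(float) if act == "relu" else np.sqrt(2 / np.pi) * np.exp(-h**2 / 2)

def committee(W, X, act):
    """Soft committee machine: unweighted sum of the K hidden-unit outputs."""
    return g(X @ W.T, act).sum(axis=1)

act = "relu"  # switch to "erf" for the sigmoidal case
B = rng.normal(size=(K, N)) / np.sqrt(N)  # teacher weights (fixed)
W = rng.normal(size=(K, N)) / np.sqrt(N)  # student weights (to be trained)

X_train = rng.normal(size=(P, N))
y_train = committee(B, X_train, act)  # noise-free teacher labels

eta = 0.1  # illustrative learning rate
for _ in range(1000):
    H = X_train @ W.T                             # (P, K) pre-activations
    err = committee(W, X_train, act) - y_train    # per-example output error
    grad = (err[:, None] * g_prime(H, act)).T @ X_train / P
    W -= eta * grad                               # full-batch gradient descent

# Monte Carlo estimate of the generalization error on fresh random inputs
X_test = rng.normal(size=(20000, N))
eps_g = 0.5 * np.mean((committee(W, X_test, act) - committee(B, X_test, act)) ** 2)
print(f"K={K}, act={act}: estimated generalization error {eps_g:.4f}")
```

Re-running this for growing P (and for both activation choices) traces out a learning curve of the kind the paper studies analytically, with specialization visible in the overlaps between student and teacher weight vectors.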
Pages: 14