Asymptotic Convergence Rate of Dropout on Shallow Linear Neural Networks

Cited by: 1
Authors
Senen-Cerda, Albert [1]
Sanders, Jaron [1]
Affiliation
[1] Eindhoven University of Technology, Eindhoven, Netherlands
Keywords
Dropout; neural networks; convergence rate; gradient flow
DOI
10.1145/3530898
CLC Number
TP3 [computing technology; computer technology]
Discipline Code
0812
Abstract
We analyze the convergence rate of gradient flows on objective functions induced by Dropout and Dropconnect when applied to shallow linear Neural Networks (NNs), a setting that can equivalently be viewed as matrix factorization with a particular regularizer. Dropout algorithms of this kind are regularization techniques that use {0, 1}-valued random variables to filter weights during training in order to avoid co-adaptation of features. By leveraging a recent result on nonconvex optimization and conducting a careful analysis of the set of minimizers as well as the Hessian of the loss function, we obtain (i) a local convergence proof for the gradient flow and (ii) a bound on the convergence rate that depends on the data, the dropout probability, and the width of the NN. Finally, we compare this theoretical bound with numerical simulations, which agree qualitatively with the bound and match it when training starts sufficiently close to a minimizer.
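To make the setting concrete, the sketch below (Python/NumPy, not the authors' code) trains a shallow linear network x -> W2 @ (W1 @ x) by stochastic gradient descent on a squared loss, with {0,1}-valued Bernoulli masks filtering either hidden nodes (Dropout) or individual weights (Dropconnect). The dimensions, learning rate, and keep-probability p are illustrative assumptions, and plain gradient descent stands in for the gradient flow analyzed in the paper.

```python
# Minimal sketch (illustration only) of Dropout / Dropconnect training of a
# shallow linear NN, i.e. matrix factorization with a mask-induced regularizer.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, n = 5, 8, 3, 200
X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, d_in)) @ X        # targets from a linear map

W1 = 0.1 * rng.standard_normal((d_hid, d_in))
W2 = 0.1 * rng.standard_normal((d_out, d_hid))
p, lr = 0.8, 1e-2                                  # keep probability, step size

def sgd_step(W1, W2, mode="dropout"):
    """One stochastic gradient step on 0.5/n * ||masked output - Y||_F^2."""
    if mode == "dropout":
        m = rng.binomial(1, p, size=(d_hid, 1))    # {0,1} mask on hidden nodes
        H = m * (W1 @ X)
        R = W2 @ H - Y
        dW2 = R @ H.T / n
        dW1 = (m * (W2.T @ R)) @ X.T / n
    else:                                          # "dropconnect": mask weights
        M1 = rng.binomial(1, p, size=W1.shape)
        M2 = rng.binomial(1, p, size=W2.shape)
        H = (M1 * W1) @ X
        R = (M2 * W2) @ H - Y
        dW2 = M2 * (R @ H.T) / n
        dW1 = M1 * (((M2 * W2).T @ R) @ X.T) / n
    return W1 - lr * dW1, W2 - lr * dW2

for t in range(5000):
    W1, W2 = sgd_step(W1, W2, mode="dropout")
    if t % 1000 == 0:
        # The mask-averaged network output is p * W2 @ W1 @ X.
        err = np.linalg.norm(p * W2 @ W1 @ X - Y) / np.linalg.norm(Y)
        print(f"step {t:5d}  relative error {err:.4f}")
```

Tracking the error of the mask-averaged product p * W2 @ W1 is one simple way to observe the kind of convergence toward a minimizer that the paper's bound describes; the printed rate here is purely illustrative.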
Pages: 53
Related Papers (showing 10 of 50)
  • [1] Xia, Y. S.; Feng, G. On convergence rate of projection neural networks. IEEE Transactions on Automatic Control, 2004, 49(1): 91-96.
  • [2] Dahl, G. E.; Sainath, T. N.; Hinton, G. E. Improving deep neural networks for LVCSR using rectified linear units and dropout. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013: 8609-8613.
  • [3] Yi, Z.; Heng, P. A.; Fu, A. W. C. Estimate of exponential convergence rate and exponential stability for neural networks. IEEE Transactions on Neural Networks, 1999, 10(6): 1487-1493.
  • [4] Manita, O. A.; Peletier, M. A.; Portegies, J. W.; Sanders, J.; Senen-Cerda, A. Universal approximation in dropout neural networks. Journal of Machine Learning Research, 2022, 23.
  • [5] Yu, J.; Spiliopoulos, K. Normalization effects on shallow neural networks and related asymptotic expansions. Foundations of Data Science, 2021, 3(2): 151-200.
  • [6] Braun, A.; Kohler, M.; Langer, S.; Walk, H. Convergence rates for shallow neural networks learned by gradient descent. Bernoulli, 2024, 30(1): 475-502.
  • [7] Nguyen, K.-B.; Choi, J.; Yang, J.-S. Checkerboard Dropout: a structured dropout with checkerboard pattern for convolutional neural networks. IEEE Access, 2022, 10: 76044-76054.
  • [8] Rankovic, D.; Rankovic, N.; Ivanovic, M.; Lazic, L. Convergence rate of artificial neural networks for estimation in software. Information and Software Technology, 2021, 138.
  • [9] Di Marco, M.; Forti, M.; Grazzini, M. Robustness of convergence in finite time for linear programming neural networks. International Journal of Circuit Theory and Applications, 2006, 34(3): 307-316.
  • [10] Bah, B.; Rauhut, H.; Terstiege, U.; Westdickenberg, M. Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers. Information and Inference: A Journal of the IMA, 2022, 11(1): 307-353.