Asymptotic Convergence Rate of Dropout on Shallow Linear Neural Networks

Cited by: 1
Authors
Senen-Cerda, Albert [1]
Sanders, Jaron [1]
Affiliations
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
Keywords
Dropout; neural networks; convergence rate; gradient flow
DOI
10.1145/3530898
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
We analyze the convergence rate of gradient flows on objective functions induced by Dropout and Dropconnect when they are applied to shallow linear Neural Networks (NNs), which can also be viewed as performing matrix factorization with a particular regularizer. Dropout algorithms of this kind are regularization techniques that use {0, 1}-valued random variables to filter weights during training in order to avoid co-adaptation of features. By leveraging a recent result on nonconvex optimization and carefully analyzing the set of minimizers as well as the Hessian of the loss function, we obtain (i) a local convergence proof of the gradient flow and (ii) a bound on the convergence rate that depends on the data, the dropout probability, and the width of the NN. Finally, we compare this theoretical bound to numerical simulations, which are in qualitative agreement with the convergence bound and match it when starting sufficiently close to a minimizer.
Pages: 53
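
For intuition, here is a minimal Python sketch of the kind of objective the abstract describes: a shallow linear NN Y ≈ W2 W1 X whose hidden units are filtered by i.i.d. {0, 1}-valued Bernoulli variables. Assuming the common 1/p rescaling of kept units, marginalizing over the Dropout masks gives the plain factorization loss plus a data-dependent regularizer; the sketch checks that identity by Monte Carlo. All dimensions and names below are illustrative and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
d, k, m, n, p = 5, 3, 8, 100, 0.7   # input dim, output dim, hidden width, samples, keep probability
X = rng.standard_normal((d, n))
Y = rng.standard_normal((k, n))
W1 = 0.1 * rng.standard_normal((m, d))
W2 = 0.1 * rng.standard_normal((k, m))

def dropout_loss(mask):
    # Squared loss with hidden units filtered by mask in {0, 1}^m and rescaled by 1/p.
    return np.linalg.norm(Y - W2 @ np.diag(mask / p) @ W1 @ X, "fro") ** 2

# Monte Carlo estimate of the expected Dropout objective.
mc = np.mean([dropout_loss(rng.binomial(1, p, size=m)) for _ in range(20000)])

# Marginalized objective: plain factorization loss plus a data-dependent regularizer.
plain = np.linalg.norm(Y - W2 @ W1 @ X, "fro") ** 2
reg = (1 - p) / p * sum(
    np.linalg.norm(W2[:, i]) ** 2 * np.linalg.norm(W1[i, :] @ X) ** 2 for i in range(m)
)
print(mc, plain + reg)  # the two values should agree up to Monte Carlo error

Gradient flow on such a marginalized objective is the object whose convergence rate the paper bounds; for the precise formulation, including the Dropconnect variant, see the article itself.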