Asymptotic Convergence Rate of Dropout on Shallow Linear Neural Networks

Cited: 1
Authors
Senen-Cerda, Albert [1 ]
Sanders, Jaron [1 ]
Affiliation
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
Keywords
Dropout; neural networks; convergence rate; gradient flow
DOI
10.1145/3530898
CLC Number
TP3 [computing technology, computer technology]
Subject Classification Code
0812
Abstract
We analyze the convergence rate of gradient flows on objective functions induced by Dropout and Dropconnect when applied to shallow linear Neural Networks (NNs), a setting that can also be viewed as matrix factorization with a particular regularizer. Dropout algorithms such as these are regularization techniques that use {0, 1}-valued random variables to filter weights during training in order to avoid coadaptation of features. By leveraging a recent result on nonconvex optimization and carefully analyzing the set of minimizers as well as the Hessian of the loss function, we obtain (i) a local convergence proof for the gradient flow and (ii) a bound on the convergence rate that depends on the data, the dropout probability, and the width of the NN. Finally, we compare this theoretical bound to numerical simulations, which agree qualitatively with the bound and match it when the initialization is sufficiently close to a minimizer.
Pages: 53
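To make the setting of the abstract concrete, below is a minimal NumPy sketch of gradient descent (a discretization of the gradient flow) on a Dropout-style objective for a shallow linear NN y = B A x, i.e., a masked matrix factorization. The dimensions, learning rate, keep probability, and scaling conventions are illustrative assumptions and do not reproduce the paper's exact objective or analysis.

```python
# Illustrative sketch only (not the authors' code): gradient descent on a
# Dropout-style loss for a two-layer linear network y = B @ A @ x.
# All names (A, B, p, lr, ...) and conventions are assumptions.
import numpy as np

rng = np.random.default_rng(0)

d_in, width, d_out, n = 5, 8, 3, 200            # layer sizes and sample size
X = rng.standard_normal((d_in, n))              # input data
Y = rng.standard_normal((d_out, d_in)) @ X      # targets from a linear teacher

A = 0.1 * rng.standard_normal((width, d_in))    # first-layer weights
B = 0.1 * rng.standard_normal((d_out, width))   # second-layer weights
p = 0.8                                         # probability a hidden unit is kept
lr = 1e-3                                       # step size

for _ in range(5000):
    # Dropout filters: {0,1}-valued Bernoulli variables on the hidden units.
    mask = (rng.random((width, 1)) < p).astype(float)
    H = (mask * A) @ X                          # hidden activations of the filtered net
    resid = B @ H - Y                           # residual of the masked forward pass
    grad_B = resid @ H.T / n                    # d/dB of (1/(2n))||B diag(mask) A X - Y||^2
    grad_A = mask * (B.T @ resid) @ X.T / n     # d/dA of the same masked loss
    A -= lr * grad_A
    B -= lr * grad_B

# Loss of the deterministic product B A (the matrix-factorization view)
print("unmasked squared loss:", np.mean((B @ A @ X - Y) ** 2))
```

In this toy version the randomness enters only through the Bernoulli mask, so the expected objective acts like a squared loss on the product B A plus a variance term induced by Dropout, which is the regularization effect the abstract refers to.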