Convergence of online gradient method for pi-sigma neural networks

Cited by: 0
Authors
Not specified
Institutions
Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, China [1]
Not specified [2]
Source
J. Comput. Inf. Syst. | 2007, Vol. 6, pp. 2345-2352
Keywords
Neural networks
DOI
Not available
Related papers
50 records in total
  • [21] A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
    Jentzen, Arnulf
    Riekert, Adrian
    [J]. ZEITSCHRIFT FUR ANGEWANDTE MATHEMATIK UND PHYSIK, 2022, 73 (05):
  • [22] Convergence time on the RS model for neural networks
    Penna, T. J. P.
    de Oliveira, P. M. C.
    Arenzon, J. J.
    de Almeida, R. M. C.
    Iglesias, J. R.
    [J]. INTERNATIONAL JOURNAL OF MODERN PHYSICS C, 1991, 2 (03) : 711 - 717
  • [23] Convergence of discrete delayed Hopfield neural networks
    Ma, Runnian
    Xie, Yu
    Zhang, Shengrui
    Liu, Wenbin
    [J]. COMPUTERS & MATHEMATICS WITH APPLICATIONS, 2009, 57 (11-12) : 1869 - 1876
  • [24] Interpolation and rates of convergence for a class of neural networks
    Cao, Feilong
    Zhang, Yongquan
    He, Ze-Rong
    [J]. APPLIED MATHEMATICAL MODELLING, 2009, 33 (03) : 1441 - 1456
  • [25] Variable Order Fractional Gradient Descent Method and Its Application in Neural Networks Optimization
    Lou, Weipu
    Gao, Wei
    Han, Xianwei
    Zhang, Yimin
    [J]. 2022 34TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2022 : 109 - 114
  • [26] A new spectral conjugate gradient method for unconstrained optimization and its application in neural networks
    Abdulrahman, Asmaa M.
    Fathi, Bayda G.
    Najm, Huda Y.
    [J]. JOURNAL OF MATHEMATICS AND COMPUTER SCIENCE-JMCS, 2025, 36 (03) : 326 - 332
  • [27] Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks
    Neftci, Emre O.
    Mostafa, Hesham
    Zenke, Friedemann
    [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2019, 36 (06) : 51 - 63
  • [28] Weight and Gradient Centralization in Deep Neural Networks
    Fuhl, Wolfgang
    Kasneci, Enkelejda
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT IV, 2021, 12894 : 227 - 239
  • [29] A Hessian-Free Gradient Flow (HFGF) method for the optimisation of deep learning neural networks
    Zhang, Sushen
    Chen, Ruijuan
    Du, Wenyu
    Yuan, Ye
    Vassiliadis, Vassilios S.
    [J]. COMPUTERS & CHEMICAL ENGINEERING, 2020, 141
  • [30] Study on fast speed fractional order gradient descent method and its application in neural networks
    Wang, Yong
    He, Yuli
    Zhu, Zhiguang
    [J]. NEUROCOMPUTING, 2022, 489 : 366 - 376