Convergence of batch gradient learning algorithm with smoothing L1/2 regularization for Sigma-Pi-Sigma neural networks

Cited by: 15
Authors
Liu, Yan [1 ,4 ]
Li, Zhengxue [2 ]
Yang, Dakun [3 ]
Mohamed, Kh. Sh. [2 ]
Wang, Jing [4 ]
Wu, Wei [2 ]
Affiliations
[1] Dalian Polytech Univ, Sch Informat Sci & Engn, Dalian 116034, Peoples R China
[2] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[3] Sun Yat Sen Univ, Sch Informat Sci & Technol, Guangzhou 510006, Guangdong, Peoples R China
[4] Dalian Univ Technol, Sch Elect & Informat Engn, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sigma-Pi-Sigma neural networks; Batch gradient learning algorithm; Convergence; Smoothing L-1/2 regularization; PENALTY;
DOI
10.1016/j.neucom.2014.09.031
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Sigma-Pi-Sigma neural networks are known to provide more powerful mapping capability than traditional feed-forward neural networks. The L-1/2 regularizer is very useful and efficient, and can be taken as a representative of the L-q (0 < q < 1) regularizers. However, the nonsmoothness of L-1/2 regularization may lead to oscillation during learning. The aim of this paper is to develop a novel batch gradient method with smoothing L-1/2 regularization for Sigma-Pi-Sigma neural networks. Compared with the conventional gradient learning algorithm, this method produces sparser weights and a simpler network structure, and it improves the learning efficiency. A comprehensive study of the weak and strong convergence results for this algorithm is also presented, indicating that the gradient of the error function goes to zero and the weight sequence converges to a fixed value, respectively. (C) 2014 Elsevier B.V. All rights reserved.
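The key idea behind a smoothing L-1/2 regularizer is to replace the non-differentiable |w|^(1/2) penalty with a smooth surrogate near zero, so that the batch gradient update is well defined everywhere and the oscillation caused by the nonsmooth term is avoided. The sketch below is a minimal NumPy illustration of this idea on a toy least-squares problem; the quartic smoothing function, the penalty weight lam, the smoothing width a, and the linear stand-in model are assumptions made here for illustration, not the paper's exact construction or the Sigma-Pi-Sigma network itself.

```python
import numpy as np

def smooth_abs(w, a=0.1):
    # Smooth surrogate for |w|: equals |w| when |w| >= a, and a quartic
    # polynomial inside (-a, a) that matches |w| in value and slope at +/- a.
    inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), inner)

def smooth_abs_grad(w, a=0.1):
    # Derivative of smooth_abs; finite everywhere, unlike d|w|/dw at w = 0.
    inner = -w**3 / (2 * a**3) + 3 * w / (2 * a)
    return np.where(np.abs(w) >= a, np.sign(w), inner)

def penalty_grad(w, lam=1e-3, a=0.1):
    # Gradient of the smoothed L1/2 penalty lam * sum(smooth_abs(w) ** 0.5).
    # smooth_abs(w) >= 3a/8 > 0, so the inverse square root never blows up.
    return lam * 0.5 * smooth_abs(w, a) ** (-0.5) * smooth_abs_grad(w, a)

# Toy batch-gradient training: a linear least-squares model stands in for the
# Sigma-Pi-Sigma network; the sparse penalty drives redundant weights toward zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_w = np.zeros(10)
true_w[:3] = [1.5, -2.0, 0.7]                           # only 3 informative weights
y = X @ true_w + 0.01 * rng.normal(size=100)

w = rng.normal(scale=0.1, size=10)
eta = 0.05                                              # learning rate
for epoch in range(5000):
    residual = X @ w - y
    grad = X.T @ residual / len(y) + penalty_grad(w)    # error gradient + penalty gradient
    w -= eta * grad                                     # batch (full-sample) update

print(np.round(w, 3))                                   # near-zero entries show the sparsifying effect
```

In spirit, replacing the linear stand-in with a Sigma-Pi-Sigma forward pass and its gradient gives the kind of batch algorithm whose weak and strong convergence the paper analyzes.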
Pages: 333-341
Number of pages: 9
Related papers (49 in total)
  • [11] Relaxed conditions for convergence analysis of online back-propagation algorithm with L2 regularizer for Sigma-Pi-Sigma neural network
    Liu, Yan
    Yang, Dakun
    Zhang, Chao
    NEUROCOMPUTING, 2018, 272 : 163 - 169
  • [12] Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions
    Fan, Qinwei
    Kang, Qian
    Zurada, Jacek M.
    INFORMATION SCIENCES, 2022, 585 : 70 - 88
  • [13] The convergence analysis of SpikeProp algorithm with smoothing L1/2 regularization
    Zhao, Junhong
    Zurada, Jacek M.
    Yang, Jie
    Wu, Wei
    NEURAL NETWORKS, 2018, 103 : 19 - 28
  • [14] Convergence of online gradient algorithm with stochastic inputs for Pi-Sigma neural networks
    Kang, Xidai
    Xiong, Yan
    Zhang, Chao
    Wu, Wei
    2007 IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTATIONAL INTELLIGENCE, VOLS 1 AND 2, 2007: 564+
  • [15] Lp approximation capabilities of sum-of-product and sigma-pi-sigma neural networks
    Long, Jinling
    Wu, Wei
    Nan, Dong
    INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, 2007, 17 (05) : 419 - 424
  • [16] Performance Optimization and Interpretability of Recurrent Sigma-Pi-Sigma Neural Networks on Application of IoE Data
    Deng, Fei
    Zhang, Liqing
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (04): 3639 - 3653
  • [17] Batch gradient training method with smoothing l0 regularization for feedforward neural networks
    Zhang, Huisheng
    Tang, Yanli
    Liu, Xiaodong
    NEURAL COMPUTING & APPLICATIONS, 2015, 26 (02) : 383 - 390
  • [18] A New Conjugate Gradient Method with Smoothing L1/2 Regularization Based on a Modified Secant Equation for Training Neural Networks
    Li, Wenyu
    Liu, Yan
    Yang, Jie
    Wu, Wei
    NEURAL PROCESSING LETTERS, 2018, 48 (02) : 955 - 978
  • [19] Convergence of Batch Gradient Method for Training of Pi-Sigma Neural Network with Regularizer and Adaptive Momentum Term
    Fan, Qinwei
    Liu, Le
    Kang, Qian
    Zhou, Li
    NEURAL PROCESSING LETTERS, 2023, 55 (04) : 4871 - 4888
  • [20] A modified gradient learning algorithm with smoothing L1/2 regularization for Takagi-Sugeno fuzzy models
    Liu, Yan
    Wu, Wei
    Fan, Qinwei
    Yang, Dakun
    Wang, Jian
    NEUROCOMPUTING, 2014, 138 : 229 - 237