Convergence of batch gradient learning algorithm with smoothing L1/2 regularization for Sigma-Pi-Sigma neural networks

Times Cited: 15
Authors
Liu, Yan [1 ,4 ]
Li, Zhengxue [2 ]
Yang, Dakun [3 ]
Mohamed, Kh. Sh. [2 ]
Wang, Jing [4 ]
Wu, Wei [2 ]
Affiliations
[1] Dalian Polytech Univ, Sch Informat Sci & Engn, Dalian 116034, Peoples R China
[2] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[3] Sun Yat Sen Univ, Sch Informat Sci & Technol, Guangzhou 510006, Guangdong, Peoples R China
[4] Dalian Univ Technol, Sch Elect & Informat Engn, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sigma-Pi-Sigma neural networks; Batch gradient learning algorithm; Convergence; Smoothing L-1/2 regularization; PENALTY;
DOI
10.1016/j.neucom.2014.09.031
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Sigma-Pi-Sigma neural networks are known to provide more powerful mapping capability than traditional feed-forward neural networks. The L-1/2 regularizer is useful and efficient, and can be taken as a representative of the L-q (0 < q < 1) regularizers. However, the nonsmoothness of L-1/2 regularization may lead to an oscillation phenomenon during training. The aim of this paper is to develop a novel batch gradient method with smoothing L-1/2 regularization for Sigma-Pi-Sigma neural networks. Compared with the conventional gradient learning algorithm, this method produces sparser weights and a simpler network structure, and it improves the learning efficiency. A comprehensive study of the weak and strong convergence results for this algorithm is also presented, indicating that the gradient of the error function tends to zero and the weight sequence tends to a fixed value, respectively. (C) 2014 Elsevier B.V. All rights reserved.
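The abstract describes the method only at a high level. The sketch below illustrates, in Python, the core idea of smoothing the L1/2 penalty: the nonsmooth |w|^(1/2) term is replaced by f(w)^(1/2), where f is a smooth, strictly positive surrogate for |w|, and the resulting differentiable penalty is added to an ordinary batch gradient step. The particular piecewise-polynomial surrogate, the threshold a, the step size, and the toy quadratic loss standing in for the Sigma-Pi-Sigma network error are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def smooth_abs(w, a=0.1):
    """Smooth surrogate for |w|: equals |w| for |w| >= a and a C^1 quartic
    polynomial on (-a, a), so the surrogate is strictly positive at w = 0."""
    w = np.asarray(w, dtype=float)
    f = np.abs(w)
    inside = f < a
    wi = w[inside]
    f[inside] = -wi**4 / (8 * a**3) + 3 * wi**2 / (4 * a) + 3 * a / 8
    return f

def smooth_abs_grad(w, a=0.1):
    """Derivative of smooth_abs; matches sign(w) outside (-a, a)."""
    w = np.asarray(w, dtype=float)
    g = np.sign(w)
    inside = np.abs(w) < a
    wi = w[inside]
    g[inside] = -wi**3 / (2 * a**3) + 3 * wi / (2 * a)
    return g

def penalty_grad(w, lam=1e-3, a=0.1):
    """Gradient of the smoothed L1/2 penalty  lam * sum_i f(w_i)**0.5,
    well defined everywhere because f(w_i) > 0."""
    return lam * smooth_abs_grad(w, a) / (2.0 * np.sqrt(smooth_abs(w, a)))

def batch_gradient_step(w, X, y, eta=0.05, lam=1e-3, a=0.1):
    """One batch (full-sample) gradient step on a toy mean-squared error
    E(w) = ||X w - y||^2 / (2 n) plus the smoothed L1/2 penalty."""
    grad = X.T @ (X @ w - y) / len(y) + penalty_grad(w, lam, a)
    return w - eta * grad

# Usage: iterate until the gradient is small (the "weak convergence" of the
# paper: the gradient of the regularized error tends to zero along iterates).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
w = rng.normal(size=10)
for _ in range(3000):
    w = batch_gradient_step(w, X, y)
```

Because the surrogate f is bounded away from zero at the origin, the penalty gradient stays bounded, which is what avoids the oscillation caused by the nonsmooth L1/2 term and makes the convergence analysis tractable.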
Pages: 333 - 341
Number of pages: 9
Related Papers
49 records in total
  • [2] A new Sigma-Pi-Sigma neural network based on L1 and L2 regularization and applications
    Jiao, Jianwei
    Su, Keqin
     AIMS MATHEMATICS, 2024, 9 (03) : 5995 - 6012
  • [3] Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural network
    Kang, Qian
    Fan, Qinwei
    Zurada, Jacek M.
     INFORMATION SCIENCES, 2021, 553 : 66 - 82
  • [4] Convergence of batch gradient algorithm with smoothing composition of group l0 and l1/2 regularization for feedforward neural networks
    Ramchoun, Hassan
    Ettaouil, Mohamed
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (03) : 269 - 278
  • [5] Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks
    Wu, Wei
    Fan, Qinwei
    Zurada, Jacek M.
    Wang, Jian
    Yang, Dakun
    Liu, Yan
    NEURAL NETWORKS, 2014, 50 : 72 - 78
  • [6] Convergence analysis of batch gradient algorithm for three classes of sigma-pi neural networks
    Zhang, Chao
    Wu, Wei
    Xiong, Yan
    NEURAL PROCESSING LETTERS, 2007, 26 (03) : 177 - 189
  • [8] Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
    Fan, Qinwei
    Wu, Wei
    Zurada, Jacek M.
    SPRINGERPLUS, 2016, 5
  • [9] L1/2 regularization learning for smoothing interval neural networks: Algorithms and convergence analysis
    Yang, Dakun
    Liu, Yan
    NEUROCOMPUTING, 2018, 272 : 122 - 129
  • [10] Convergence of online gradient method for feedforward neural networks with smoothing L1/2 regularization penalty
    Fan, Qinwei
    Zurada, Jacek M.
    Wu, Wei
    NEUROCOMPUTING, 2014, 131 : 208 - 216