Convergence of batch gradient learning algorithm with smoothing L1/2 regularization for Sigma-Pi-Sigma neural networks

Cited: 15
Authors
Liu, Yan [1 ,4 ]
Li, Zhengxue [2 ]
Yang, Dakun [3 ]
Mohamed, Kh. Sh. [2 ]
Wang, Jing [4 ]
Wu, Wei [2 ]
Affiliations
[1] Dalian Polytech Univ, Sch Informat Sci & Engn, Dalian 116034, Peoples R China
[2] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[3] Sun Yat Sen Univ, Sch Informat Sci & Technol, Guangzhou 510006, Guangdong, Peoples R China
[4] Dalian Univ Technol, Sch Elect & Informat Engn, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sigma-Pi-Sigma neural networks; Batch gradient learning algorithm; Convergence; Smoothing L-1/2 regularization; PENALTY;
DOI
10.1016/j.neucom.2014.09.031
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Sigma-Pi-Sigma neural networks are known to provide more powerful mapping capability than traditional feed-forward neural networks. The L-1/2 regularizer is very useful and efficient, and can be taken as a representative of the L-q (0 < q < 1) regularizers. However, the nonsmoothness of L-1/2 regularization may lead to an oscillation phenomenon during training. The aim of this paper is to develop a novel batch gradient method with smoothing L-1/2 regularization for Sigma-Pi-Sigma neural networks. Compared with the conventional gradient learning algorithm, this method produces sparser weights and a simpler network structure, and it improves the learning efficiency. A comprehensive study of the weak and strong convergence results for this algorithm is also presented, indicating that the gradient of the error function goes to zero and the weight sequence converges to a fixed value, respectively. (C) 2014 Elsevier B.V. All rights reserved.
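As a rough illustration of the smoothing idea described in the abstract, the sketch below shows one common way to smooth an L-1/2 penalty: replace |w| with a C^1 piecewise-polynomial surrogate f(w) that stays strictly positive, so that the penalty lam * sum_i f(w_i)^(1/2) and its gradient remain well defined at w_i = 0 and the oscillation caused by nonsmoothness is avoided. The function names (smoothed_abs, l_half_penalty, batch_gradient_step), the smoothing radius a, and the particular polynomial are illustrative assumptions, not the exact construction used in the paper.

```python
import numpy as np

def smoothed_abs(t, a=0.1):
    """Smooth surrogate for |t| (assumed piecewise polynomial, C^1 at |t| = a).
    Equals |t| for |t| >= a; strictly positive (>= 3a/8) everywhere."""
    t = np.asarray(t, dtype=float)
    inner = -t**4 / (8 * a**3) + 3 * t**2 / (4 * a) + 3 * a / 8  # matches |t| at |t| = a
    return np.where(np.abs(t) >= a, np.abs(t), inner)

def smoothed_abs_grad(t, a=0.1):
    """Derivative of the surrogate above."""
    t = np.asarray(t, dtype=float)
    inner = -t**3 / (2 * a**3) + 3 * t / (2 * a)                 # matches sign(t) at |t| = a
    return np.where(np.abs(t) >= a, np.sign(t), inner)

def l_half_penalty(w, lam, a=0.1):
    """Smoothed L1/2 penalty: lam * sum_i f(w_i)**(1/2), with f the surrogate above."""
    return lam * np.sum(smoothed_abs(w, a) ** 0.5)

def l_half_penalty_grad(w, lam, a=0.1):
    """Gradient of the smoothed penalty; finite everywhere since f(w) >= 3a/8 > 0."""
    f = smoothed_abs(w, a)
    return lam * 0.5 * f ** (-0.5) * smoothed_abs_grad(w, a)

def batch_gradient_step(w, data_grad, eta, lam, a=0.1):
    """One batch update: w <- w - eta * (gradient of data error + gradient of penalty)."""
    return w - eta * (data_grad + l_half_penalty_grad(w, lam, a))
```

In the paper's setting, data_grad would be the gradient of the batch error of the Sigma-Pi-Sigma network with respect to the weights; it is left abstract here since the network equations are not reproduced in this record.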
Pages: 333-341
Number of pages: 9