A modified gradient learning algorithm with smoothing L1/2 regularization for Takagi-Sugeno fuzzy models

Cited by: 18
Authors
Liu, Yan [1 ]
Wu, Wei [2 ]
Fan, Qinwei [2 ]
Yang, Dakun [2 ]
Wang, Jian [3 ]
Affiliations
[1] Dalian Polytech Univ, Sch Informat Sci & Engn, Dalian 116034, Peoples R China
[2] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[3] China Univ Petr, Sch Math & Computat Sci, Dongying 257061, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Takagi-Sugeno (T-S) fuzzy models; Gradient descent method; Convergence; Gaussian-type membership function; Variable selection; Regularizer; IDENTIFICATION; NETWORKS; SYSTEMS;
DOI
10.1016/j.neucom.2014.01.041
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A popular and feasible approach to determining the appropriate size of a neural network is to remove unnecessary connections from an oversized network. The advantage of L1/2 regularization for sparse modeling is well recognized. However, the nonsmoothness of the L1/2 regularizer may cause oscillations during learning. This paper proposes an approach with smoothing L1/2 regularization for Takagi-Sugeno (T-S) fuzzy models, in order to improve learning efficiency and to promote sparsity of the models. The new smoothing L1/2 regularizer removes the oscillations and, moreover, enables proofs of weak and strong convergence results for zero-order T-S fuzzy neural networks. Furthermore, a relationship between the learning rate parameter and the penalty parameter is given that guarantees convergence. Simulation results support the theoretical findings and show the superiority of the smoothing L1/2 regularization over the original L1/2 regularization. (C) 2014 Elsevier B.V. All rights reserved.
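The smoothing idea described in the abstract can be sketched as follows: near zero, |w| is replaced by a C^1 piecewise polynomial, so the square-root penalty f(w)^(1/2) and its gradient stay bounded as w approaches 0, instead of blowing up as d|w|^(1/2)/dw does. The particular polynomial and the threshold parameter `a` below are common choices in this line of work, not necessarily the exact ones used in the paper:

```python
import numpy as np

def smooth_abs(w, a=0.05):
    """C^1 piecewise-polynomial smoothing of |w|: equals |w| for |w| >= a,
    and a quartic polynomial matching value and slope at |w| = a otherwise."""
    w = np.asarray(w, dtype=float)
    inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), inner)

def smooth_half_penalty(w, a=0.05):
    """Smoothing L1/2 regularizer: sum over weights of f(w)^(1/2)."""
    return float(np.sum(smooth_abs(w, a) ** 0.5))

def smooth_half_grad(w, a=0.05):
    """Gradient of the smoothing penalty. Since f(w) >= 3a/8 > 0 everywhere,
    the factor f(w)^(-1/2) is bounded and no oscillation arises near w = 0."""
    w = np.asarray(w, dtype=float)
    f = smooth_abs(w, a)
    df = np.where(np.abs(w) >= a,
                  np.sign(w),
                  -w**3 / (2 * a**3) + 3 * w / (2 * a))
    return 0.5 * f ** (-0.5) * df
```

In a batch gradient learning loop, this penalty gradient (scaled by the penalty parameter) would simply be added to the error gradient of the T-S fuzzy model's parameters; the abstract's learning-rate/penalty-parameter relationship then bounds the combined step.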
Pages: 229-237
Number of pages: 9