A gradient-based learning method with smoothing group $L_0$ regularization for interval perceptron and interval weights
Y. Liu et al.

Authors
Yan Liu [1 ]
Jinru Cui [2 ]
Rui Wang [1 ]
Yuanquan Liu [3 ]
Jian Li [1 ]
Affiliations
[1] Dalian Polytechnic University,School of Information Science and Engineering
[2] Dalian Polytechnic University,Department of Basic Courses Teaching
[3] Qingdao Institute of Technology,School of Information Engineering
Keywords
Smoothing approximation; Group $L_0$ regularization; Interval perceptron; Interval weights; Convergence; 68W01
DOI
10.1007/s40314-025-03180-4
Abstract
As the simplest structure of interval neural networks (INNs), the single-layer interval perceptron (SIP) has the advantages of an uncomplicated structure and fast computation, making it well suited for handling various kinds of uncertain data. While $L_0$ regularization yields the sparsest solution among all $L_n$ regularization methods, optimizing $L_0$ regularization is challenging because it is an NP-hard problem; it is therefore approximated by smoothing functions. Incorporating smoothing group $L_0$ regularization retains the sparse-solution characteristics of $L_0$ regularization while circumventing its NP-hardness. Building on this, a modified learning algorithm based on smoothing group $L_0$ regularization for an interval perceptron with interval weights (MIPSGL$_0$) is proposed, in which the interval perceptron takes real numbers as inputs while its weights and outputs are represented as intervals. The radius of each interval weight is expressed through a quadratic term rather than an absolute-value function, which guarantees a positive radius and prevents the oscillation phenomenon. The monotonicity and the strong and weak convergence of the proposed algorithm are rigorously proved under moderate assumptions. Moreover, experimental results on one-class approximation and one-class classification simulations show that the proposed algorithm performs better in terms of training and testing mean squared error (MSE), weight pruning, and accuracy.
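The abstract names the ingredients of the method without giving its formulas, so the following is only a minimal illustrative sketch of the idea: real inputs, interval weights whose radius is a quadratic term $r_i = q_i^2$ (nonnegative by construction, as the abstract describes), and a smoothed group-$L_0$ penalty added to an interval MSE loss. The smoothing function $\|g\|^2/(\|g\|^2+\sigma^2)$, the grouping of $(c_i, q_i)$ per input, and all names (`forward`, `smooth_group_l0`, `train_step`, `sigma`, `lam`) are assumptions for illustration, not the authors' MIPSGL$_0$ algorithm.

```python
import numpy as np

def forward(x, c, q):
    # Interval perceptron with real inputs x and interval weights
    # [c_i - q_i**2, c_i + q_i**2]: the quadratic radius r_i = q_i**2
    # is nonnegative by construction (the abstract's key idea).
    center = c @ x
    radius = (q ** 2) @ np.abs(x)
    return center - radius, center + radius

def smooth_group_l0(c, q, sigma):
    # Smoothed group-L0 penalty, one group (c_i, q_i) per input:
    # n2 / (n2 + sigma^2) tends to 1 for an active group and to 0 as
    # the group shrinks, approximating the count of nonzero groups.
    # (An illustrative smoothing; the paper's function may differ.)
    n2 = c ** 2 + q ** 2
    return np.sum(n2 / (n2 + sigma ** 2))

def train_step(x, t_lo, t_hi, c, q, lam=1e-3, sigma=0.1, lr=0.01):
    # One gradient step on 0.5*(e_lo**2 + e_hi**2) + lam * penalty.
    y_lo, y_hi = forward(x, c, q)
    e_lo, e_hi = y_lo - t_lo, y_hi - t_hi
    gc = (e_lo + e_hi) * x                    # d(loss)/d(center c_i)
    gq = (e_hi - e_lo) * np.abs(x) * 2.0 * q  # d(loss)/dq_i via r = q^2
    n2 = c ** 2 + q ** 2
    coef = 2.0 * sigma ** 2 / (n2 + sigma ** 2) ** 2
    gc += lam * coef * c                      # penalty gradient per group
    gq += lam * coef * q
    return c - lr * gc, q - lr * gq

# Toy usage: drive one input's output interval toward the target [0.5, 1.5].
rng = np.random.default_rng(0)
x = rng.normal(size=4)
c = rng.normal(size=4) * 0.1
q = rng.normal(size=4) * 0.1
for _ in range(2000):
    c, q = train_step(x, 0.5, 1.5, c, q)
print(forward(x, c, q))  # approaches (0.5, 1.5) as training proceeds
```

Note the effect of the quadratic radius: because the gradient with respect to $q_i$ carries a factor $2q_i$, the radius update is smooth through zero, whereas an absolute-value radius $|w_i|$ would have a non-differentiable kink there, which is the oscillation source the abstract alludes to.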