Orthogonal Super Greedy Learning for Sparse Feedforward Neural Networks
Cited by: 2
Authors:
Xu, Lin [1]
Cao, Xiangyong [2]
Yao, Jing [2]
Yan, Zheng [1]
Affiliations:
[1] Shanghai Em Data Technol Co Ltd, Inst Artificial Intelligence, Shanghai 200000, Peoples R China
[2] Xi An Jiao Tong Univ, Inst Informat & Syst Sci, Xian 710049, Peoples R China
Source:
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING | 2022 / Vol. 9 / Issue 01
Keywords:
Feedforward neural networks;
Dictionaries;
Neurons;
Radial basis function networks;
Biological neural networks;
Machine learning;
Learning systems;
Computational complexity;
feature selection;
generalization ability;
model compression;
neural networks;
APPROXIMATION;
ALGORITHM;
CONVERGENCE;
DESIGN;
DOI:
10.1109/TNSE.2020.3033418
CLC Number:
T [Industrial Technology];
Discipline Code:
08;
Abstract:
Analytic approaches for feedforward neural networks, e.g., Radial Basis Function (RBF) networks, have attractive characteristics such as sound theoretical properties and fast numerical implementations. However, they still have several drawbacks. The primary defect is that their generalization performance and computational complexity are susceptible to the influence of irrelevant hidden variables. How to alleviate this influence has therefore become a crucial issue for feedforward neural networks. In this paper, we propose an Orthogonal Super Greedy Learning (OSGL) method for hidden neuron selection. OSGL greedily selects several hidden neurons at a time from a given network structure until an adequately sparse network has been constructed. Theoretical analyses show that it reaches the optimal learning rate. Extensive empirical results demonstrate that the proposed method produces excellent generalization performance with a sparse and compact feature representation within feedforward networks.
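The abstract describes OSGL only at a high level, so the following is a minimal Python sketch of an orthogonal-super-greedy style selection loop in the spirit it outlines: pick a small batch of candidate hidden neurons most correlated with the current residual, then refit the output weights on the whole active set by least squares. The function name osgl_select, the matrix H of candidate-neuron activations, and the parameters step, max_neurons, and tol are all hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def osgl_select(H, y, step=5, max_neurons=50, tol=1e-6):
    """Hypothetical sketch of orthogonal super greedy hidden-neuron selection.

    H    : (n_samples, n_candidates) activations of candidate hidden neurons
    y    : (n_samples,) target values
    step : number of neurons picked per greedy iteration
    """
    n_samples, n_candidates = H.shape
    active = []                       # indices of selected hidden neurons
    weights = np.zeros(0)             # output weights for the active neurons
    residual = y.copy()
    while len(active) < max_neurons and np.linalg.norm(residual) > tol:
        # Correlation of every candidate neuron with the current residual.
        scores = np.abs(H.T @ residual)
        scores[active] = -np.inf      # never re-select an already active neuron
        picks = np.argsort(scores)[-step:]
        active.extend(int(i) for i in picks)
        # Orthogonal step: least-squares output weights on the active set.
        weights, *_ = np.linalg.lstsq(H[:, active], y, rcond=None)
        residual = y - H[:, active] @ weights
    return active, weights
```

A typical use, under the same assumptions, would evaluate a large pool of randomly parameterized RBF neurons on the training inputs to form H, call osgl_select(H, y), and keep only the returned active neurons and weights as the sparse network.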
Pages: 161-170 (10 pages)