On the universal approximation theorem of fuzzy neural networks with random membership function parameters

Authors
Wang, LP
Liu, B
Wan, CR
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
[2] Xiangtan Univ, Coll Informat Engn, Xiangtan, Hunan, Peoples R China
Source
ADVANCES IN NEURAL NETWORKS - ISNN 2005, PT 1, PROCEEDINGS | 2005 / Vol. 3496
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Lowe [1] proposed that the kernel parameters of a radial basis function (RBF) neural network may first be fixed, with the output-layer weights then determined by pseudo-inverse. Jang, Sun, and Mizutani (p. 342 of [2]) pointed out that this type of two-step training method can also be used in fuzzy neural networks (FNNs). Through extensive computer simulations, we [3] demonstrated that an FNN with randomly fixed membership function parameters (FNN-RM) achieves faster training and better generalization than the classical FNN. To provide a theoretical basis for the FNN-RM, in this paper we present an intuitive proof of the universal approximation ability of the FNN-RM, based on the orthogonal set theory proposed by Kaminski and Strumillo for RBF neural networks [4].
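As a rough illustration of the two-step training scheme described in the abstract, the following is a minimal sketch in Python/NumPy. It assumes Gaussian membership functions whose product firing strengths reduce to an RBF-style activation, omits the normalization step of a full fuzzy inference system, and uses illustrative names (fnn_rm_fit, fnn_rm_predict) and parameter ranges that are not taken from the paper.

import numpy as np

def fnn_rm_fit(X, T, n_hidden=50, seed=0):
    """Two-step training: (1) randomly fix the membership function
    parameters (centers and widths); (2) solve the output-layer
    weights by Moore-Penrose pseudo-inverse."""
    rng = np.random.default_rng(seed)
    n_inputs = X.shape[1]
    # Step 1: random centers inside the data range, random widths
    # (the ranges here are illustrative assumptions).
    centers = rng.uniform(X.min(axis=0), X.max(axis=0),
                          size=(n_hidden, n_inputs))
    widths = rng.uniform(0.2, 1.0, size=n_hidden)
    # Product of per-dimension Gaussian memberships equals a Gaussian
    # RBF over the whole input vector: hidden-layer output matrix H.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * widths ** 2))
    # Step 2: output weights via pseudo-inverse (least squares fit).
    W = np.linalg.pinv(H) @ T
    return centers, widths, W

def fnn_rm_predict(X, centers, widths, W):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * widths ** 2))
    return H @ W

# Usage: approximate sin(2*pi*x) on [0, 1].
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
T = np.sin(2.0 * np.pi * X).ravel()
centers, widths, W = fnn_rm_fit(X, T, n_hidden=30, seed=42)
mse = np.mean((fnn_rm_predict(X, centers, widths, W) - T) ** 2)
print(f"training MSE: {mse:.2e}")

Because the random first stage never adjusts the membership functions, only a single linear least-squares solve is needed, which is what gives the FNN-RM its training-speed advantage over gradient-based tuning of all parameters.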
Pages: 45-50 (6 pages)