On the (1+1/2) layer neural networks as universal approximators

Cited by: 0
Authors
Ciuca, I [1 ]
Ware, JA [1 ]
Affiliation
[1] Res Inst Informat, Bucharest, Romania
Source
IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE | 1998
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The paper deals with the approximation of continuous functions by feedforward neural networks. After presenting one of the main results of Ito, the paper seeks a universal approximator implementable as a (1+1/2)-layer neural network using the Heaviside function as the univariate function. It presents an explicit formula for function approximation implementable as a three-layer feedforward neural network instead of a four-layer neural network. These three-layer feedforward neural networks have the same number of neurons in the hidden layer as the equivalent four-layer neural networks have in their second hidden layer.
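The sketch below is not the paper's explicit formula; it is a minimal illustration, under assumed names (heaviside, build_step_approximator), of the general idea the abstract describes: a single hidden layer of Heaviside units (a three-layer network counting input and output) can approximate a continuous function on an interval, here via a simple staircase construction.

```python
import numpy as np

def heaviside(z):
    """Unit step activation: 1 for z >= 0, else 0."""
    return (z >= 0).astype(float)

def build_step_approximator(f, a, b, n_hidden):
    """Build a single-hidden-layer network with Heaviside activations
    that approximates f on [a, b] by a staircase of n_hidden steps.
    This is a generic construction for illustration, not the paper's
    explicit formula."""
    xs = np.linspace(a, b, n_hidden + 1)      # grid points x_0 .. x_n
    biases = -xs[1:]                          # unit i fires when x >= x_i
    out_weights = np.diff(f(xs))              # step heights f(x_i) - f(x_{i-1})
    out_bias = f(xs[0])                       # baseline value f(a)

    def net(x):
        x = np.atleast_1d(x)[:, None]         # shape (batch, 1)
        h = heaviside(x + biases)             # hidden layer, input weight 1
        return h @ out_weights + out_bias     # linear output layer
    return net

# Usage: approximate sin on [0, 2*pi] with 200 hidden Heaviside units.
net = build_step_approximator(np.sin, 0.0, 2 * np.pi, 200)
x = np.linspace(0.0, 2 * np.pi, 1000)
print("max abs error:", np.max(np.abs(net(x) - np.sin(x))))
```

With 200 units the maximum error is on the order of the grid spacing times the function's Lipschitz constant (about 0.03 here), and it shrinks as more hidden units are added.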
Pages: 1218-1223
Number of pages: 6