Constrained-learning in Artificial Neural Networks

Cited by: 0
Authors
Parra-Hernández, R [1 ]
Affiliation
[1] Univ Victoria, Dept Elect & Comp Engn, Lab Parallel & Intelligent Syst, LAPIS, Victoria, BC V8W 2Y2, Canada
Source
2003 IEEE PACIFIC RIM CONFERENCE ON COMMUNICATIONS, COMPUTERS, AND SIGNAL PROCESSING, VOLS 1 AND 2, CONFERENCE PROCEEDINGS | 2003
Keywords
DOI
None available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
The capacity to generalize is the most important characteristic of a neural network. However, this capacity is lost when over-fitting occurs during training: although the error on the training data becomes very small, the error on new data presented to the network remains large. This work presents an approach aimed at improving the generalization capacity of neural networks.
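The over-fitting symptom the abstract describes (small training error, large error on new data) is commonly made visible by monitoring the error on a held-out validation set during training. The following NumPy sketch is illustrative only and is not the constrained-learning method of the paper; the network size, learning rate, and synthetic sin(x) task are arbitrary assumptions for demonstration.

```python
# Illustrative sketch (NOT the paper's constrained-learning method):
# train a tiny one-hidden-layer network with plain gradient descent and
# track the held-out validation error to observe generalization.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression task: noisy samples of sin(x) (assumed example).
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)
x_train, y_train = x[:150], y[:150]   # used to fit the weights
x_val, y_val = x[150:], y[150:]       # held out to measure generalization

# One hidden layer of tanh units.
h = 20
W1 = rng.normal(scale=0.5, size=(1, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=(h, 1)); b2 = np.zeros(1)

def forward(inp):
    a = np.tanh(inp @ W1 + b1)        # hidden activations
    return a @ W2 + b2, a             # network output, activations

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

lr = 0.05
best_val, best_epoch = np.inf, 0
for epoch in range(2000):
    out, a = forward(x_train)
    err = out - y_train
    # Backpropagation of the squared-error loss.
    gW2 = a.T @ err / len(x_train); gb2 = err.mean(axis=0)
    da = (err @ W2.T) * (1 - a ** 2)
    gW1 = x_train.T @ da / len(x_train); gb1 = da.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    # Validation error rising while training error falls signals over-fitting.
    val_err = mse(forward(x_val)[0], y_val)
    if val_err < best_val:
        best_val, best_epoch = val_err, epoch

train_err = mse(forward(x_train)[0], y_train)
print(f"final train MSE={train_err:.4f}, "
      f"best val MSE={best_val:.4f} at epoch {best_epoch}")
```

Stopping at the epoch where validation error is lowest (early stopping) is one standard remedy; the paper instead pursues a constrained formulation of the training process itself.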
Pages: 352-355
Page count: 4
References
10 items in total
[1] BIGGS M, 1975, CONSTRAINED OPTIMIZA
[2] HAGAN MT, MENHAJ MB. Training feedforward networks with the Marquardt algorithm [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 1994, 5(6): 989-993
[3] HAN SP, 1977, J OPTIMIZATION THEOR, V22
[4] HUNT KT, 1992, AUTOMATICA, V28
[5] LU CC, 1992, IEEE T CONSUMER ELEC, V8
[6] MILLER WT, 1991, NEURAL NETWORKS CONT
[7] NARENDRA KS, 1991, IEEE T NEURAL NETWOR, V2
[8] POWELL M, 1978, NUMERICAL ANAL NOTES, V630
[9] RICOMARTINEZ R, 1994, THESIS PRINCETON U
[10] WIDROW B, 1988, COMPUTER, V21