Regularization and feedforward artificial neural network training with noise

Cited: 0
Authors
Chandra, P [1 ]
Singh, Y [1 ]
Affiliation
[1] Indraprastha Univ, GGS, Sch Informat Technol, Kashmere Gate, Delhi 110006, India
Source
PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4 | 2003
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Regularization is a method for controlling the complexity of models. Two commonly used techniques for model-complexity control are explicit regularization, in which a modifier term incorporating a priori knowledge about the function to be approximated by a feedforward artificial neural network is added to the risk functional, and implicit regularization, in which noise is added to the system variables during training. The relationship between these two types of regularization is explained. A regularization term is derived based on a general noise model, and the interplay between the various noise-mediated regularization terms is described.
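The implicit scheme described in the abstract can be sketched in a few lines: instead of adding a penalty term to the risk functional, zero-mean noise is injected into the inputs on every training pass. The following is a minimal NumPy illustration of that idea, not the authors' derivation or code; the network size, noise level, and learning rate are arbitrary choices for a toy regression task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on a small training set.
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X)

# Single-hidden-layer feedforward network with tanh hidden units.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr, sigma = 0.05, 0.1  # learning rate; std of injected input noise

for epoch in range(2000):
    # Implicit regularization: perturb the system variables (here,
    # the inputs) with zero-mean Gaussian noise on every pass,
    # rather than adding an explicit penalty to the risk functional.
    Xn = X + rng.normal(0, sigma, X.shape)

    # Forward pass on the noisy inputs.
    H = np.tanh(Xn @ W1 + b1)
    out = H @ W2 + b2
    err = out - y

    # Backward pass for the mean-squared-error risk.
    g_out = 2 * err / len(X)
    gW2 = H.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - H**2)
    gW1 = Xn.T @ g_h; gb1 = g_h.sum(0)

    # Plain gradient-descent updates.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Evaluate on the clean (noise-free) inputs.
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"train MSE after noisy training: {mse:.4f}")
```

For small noise variance, expanding the expected noisy-input risk shows that this procedure approximately minimizes the clean risk plus a smoothness penalty proportional to sigma squared, which is the correspondence between implicit and explicit regularization the paper analyzes.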
Pages: 2366 / +
Page count: 2