Improving Training in the Vicinity of Temporary Minima

Cited by: 0
Authors:
Roth, Ido [1]
Margaliot, Michael [1]
Affiliations:
[1] Tel Aviv Univ, Sch Elec Eng Syst, IL-69978 Tel Aviv, Israel
Source:
BIO-INSPIRED SYSTEMS: COMPUTATIONAL AND AMBIENT INTELLIGENCE, PT 1 | 2009 / Vol. 5517
Keywords:
ARTIFICIAL NEURAL-NETWORKS; LOCAL MINIMA; DYNAMICS; NOISE;
DOI:
Not available
Chinese Library Classification:
TP18 [Artificial Intelligence Theory];
Discipline codes:
081104; 0812; 0835; 1405;
Abstract:
An important problem in learning with gradient descent algorithms (such as backpropagation) is the slowdown incurred by temporary minima (TM). We consider this problem for an artificial neural network trained to solve the XOR problem. The network is transformed into an equivalent all-permutations fuzzy rule-base, which provides a symbolic representation of the knowledge embedded in the network. We develop a mathematical model for the evolution of the fuzzy rule-base parameters during learning in the vicinity of a TM. We show that the rule-base becomes singular and tends to remain singular in the vicinity of a TM. Our analysis suggests a simple remedy for overcoming the slowdown in the learning process incurred by TM: slightly perturbing the values of the training examples so that they are no longer symmetric. Simulations demonstrate the usefulness of this approach.
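The remedy the abstract describes, slightly perturbing the training values to break the symmetry of the XOR data, can be sketched as follows. The network size (2-2-1), learning rate, number of steps, and perturbation magnitude `eps` are illustrative assumptions for this sketch, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard symmetric XOR training set.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Symmetry-breaking remedy: perturb the target values by a small random
# amount so the examples are no longer perfectly symmetric.
eps = 0.05
y_perturbed = y + eps * rng.uniform(-1.0, 1.0, size=y.shape)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(targets, steps=5000, lr=0.5, seed=1):
    """Train a 2-2-1 sigmoid network on XOR with plain gradient descent
    on the squared error, and return the network's outputs on X."""
    g = np.random.default_rng(seed)
    W1 = g.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
    W2 = g.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = sigmoid(X @ W1 + b1)        # hidden-layer activations
        out = sigmoid(h @ W2 + b2)      # network output
        err = out - targets             # derivative of 0.5*||out - t||^2
        d2 = err * out * (1.0 - out)    # backprop through output sigmoid
        d1 = (d2 @ W2.T) * h * (1.0 - h)
        W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(axis=0)
        W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(axis=0)
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

preds = train_xor(y_perturbed)
print(np.round(preds.ravel(), 2))
```

The same loop run on the unperturbed targets `y` can be used to compare how long the trajectory lingers near a temporary minimum.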
Pages: 131-139
Page count: 9