Variations on U-shaped learning

Cited by: 2
Authors
Carlucci, L [1]
Jain, SA
Kinber, E
Stephan, F
Affiliations
[1] Univ Delaware, Dept Comp & Informat Sci, Newark, DE 19716 USA
[2] Univ Siena, Dipartimento Matemat, I-53100 Siena, Italy
[3] Natl Univ Singapore, Sch Comp, Singapore 117543, Singapore
[4] Sacred Heart Univ, Dept Comp Sci, Fairfield, CT 06432 USA
[5] Natl Univ Singapore, Dept Math, Singapore 117543, Singapore
Source
LEARNING THEORY, PROCEEDINGS | 2005, Vol. 3559
Keywords
DOI
10.1007/11503415_26
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of learning? Returning to wrong conjectures complements the paradigm of U-shaped learning [2, 6, 8, 20, 24], in which a learner returns to old correct conjectures. We explore this problem for the classical models of learning in the limit: TxtEx-learning, where the learner stabilizes on a single correct conjecture, and TxtBc-learning, where the learner stabilizes on a sequence of (possibly distinct) correct grammars for the target concept. In both cases we show that, surprisingly, returning to wrong conjectures is sometimes necessary to achieve full learning power. On the other hand, it is not necessary to return to old "overgeneralizing" conjectures, i.e., conjectures containing elements that do not belong to the target language. We also consider the problem in the context of so-called vacillatory learning, where the learner stabilizes on a finite set of correct grammars. In this case we show that both returning to old wrong conjectures and returning to old "overgeneralizing" conjectures are necessary for full learning power. We also show that, surprisingly, learners consistent with the input seen so far can be made decisive [2, 21]: they never have to return to any old conjecture, wrong or right.
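For context, the following is a sketch of the standard definitions behind the models named in the abstract (Gold-style learning in the limit from positive data); the notation — a text $T$, its initial segment $T[n]$, the learner's conjecture $M(T[n])$, and $W_e$ for the language generated by grammar $e$ — follows the usual conventions of the field and is not taken from this record.

% A text T for a language L is an infinite sequence whose range is
% exactly L; T[n] is its initial segment of length n, M(T[n]) is the
% learner's conjectured grammar after seeing T[n], and W_e is the
% language generated by grammar e.
\begin{itemize}
  \item TxtEx: $M$ learns $L$ iff on every text $T$ for $L$ there are
        $e$ and $n_0$ with $W_e = L$ and $M(T[n]) = e$ for all
        $n \ge n_0$ (syntactic convergence to one correct grammar).
  \item TxtBc: $M$ learns $L$ iff on every text $T$ for $L$ there is
        an $n_0$ with $W_{M(T[n])} = L$ for all $n \ge n_0$ (semantic
        convergence: almost all conjectures are correct, though they
        may differ as grammars).
  \item U-shaped behaviour: on some text, $M$ outputs a correct
        conjecture, abandons it, and later returns to a correct
        conjecture; the variation studied here asks instead whether
        $M$ must ever return to a previously abandoned wrong
        conjecture.
  \item Decisive: $M$ never returns to any previously abandoned
        conjecture, correct or wrong.
\end{itemize}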
Pages: 382-397
Page count: 16