When unlearning helps

Cited by: 26
Authors
Baliga, Ganesh [2 ]
Case, John [3 ]
Merkle, Wolfgang [4 ]
Stephan, Frank [1 ]
Wiehagen, Rolf [5 ]
Affiliations
[1] Natl Univ Singapore, Dept Math & Comp Sci, Singapore 117543, Singapore
[2] Rowan Univ, Glassboro, NJ 08028 USA
[3] Univ Delaware, Newark, DE 19716 USA
[4] Univ Heidelberg, Inst Informat, D-69120 Heidelberg, Germany
[5] Tech Univ Kaiserslautern, Fachbereich Informat, D-67653 Kaiserslautern, Germany
Funding
U.S. National Science Foundation;
Keywords
computational learning theory; cognitive science; inductive inference of grammars for languages from positive data;
DOI
10.1016/j.ic.2007.10.005
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Overregularization seen in child language learning, for example in verb tense constructs, involves abandoning correct behaviours for incorrect ones and later reverting to the correct behaviours. Quite a number of other child development phenomena also follow this U-shaped pattern of learning, unlearning and relearning. A decisive learner does not do this and, more generally, never abandons a hypothesis H for an inequivalent one only to later conjecture a hypothesis equivalent to H, where equivalence means semantic or behavioural equivalence. The first main result of the present paper entails that decisiveness is a real restriction on Gold's model of explanatory (or in-the-limit) learning of grammars for languages from positive data. This result also solves an open problem posed in 1986 by Osherson, Stob and Weinstein. Second-time decisive learners semantically conjecture each of their hypotheses for any language at most twice. By contrast, such learners are shown not to restrict Gold's model of learning. Non-U-shaped learning liberalizes the requirement of decisiveness: the same restriction is imposed not on all hypotheses output but only on correct ones. The situation regarding learning power for non-U-shaped learning is a little more complex than that for decisiveness, as explained next. Gold's original model for learning grammars from positive data, called EX-learning, requires, for success, syntactic convergence to a correct grammar. A slight variant, called BC-learning, requires only semantic convergence to a sequence of correct grammars that need not be syntactically identical to one another. The second main result says that non-U-shaped learning does not restrict EX-learning. However, by an argument of Fulk, Jain and Osherson, non-U-shaped learning does restrict BC-learning.
The final section discusses the possible meaning of these results for cognitive science and indicates some avenues worthy of future investigation. (c) 2007 Elsevier Inc. All rights reserved.
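The distinction the abstract draws between decisiveness and non-U-shapedness concerns which returns to an abandoned conjecture are forbidden. A minimal Python sketch of that distinction, assuming hypotheses are represented extensionally as frozensets (so semantic equivalence is plain set equality); the helper names and the scripted conjecture sequences are illustrative assumptions, not the paper's formal machinery, which works with grammars and their indices:

```python
# Toy illustration of decisive vs. non-U-shaped conjecture sequences.
# Hypotheses are frozensets, so "semantically equivalent" is set equality.
# These helpers and example sequences are illustrative only.

def abandons_and_returns(conjectures):
    """True if some hypothesis is abandoned for an inequivalent one and
    an equivalent hypothesis is conjectured again later, i.e. the
    sequence violates decisiveness."""
    n = len(conjectures)
    for i in range(n):
        for j in range(i + 1, n):
            if conjectures[j] != conjectures[i]:          # abandoned
                for k in range(j + 1, n):
                    if conjectures[k] == conjectures[i]:  # returned
                        return True
    return False

def is_u_shaped(conjectures, target):
    """Like the decisiveness check, but only abandoning a *correct*
    hypothesis (one equivalent to the target language) counts."""
    n = len(conjectures)
    for i in range(n):
        if conjectures[i] != target:
            continue                                      # only correct ones matter
        for j in range(i + 1, n):
            if conjectures[j] != target:                  # correct, then wrong...
                return any(conjectures[k] == target       # ...then correct again
                           for k in range(j + 1, n))
    return False

target = frozenset({"go", "went"})
# Overregularization analogy: "went" -> "goed" -> "went".
u_shape = [frozenset({"go", "went"}),
           frozenset({"go", "goed"}),
           frozenset({"go", "went"})]
monotone = [frozenset({"go"}), frozenset({"go", "went"})]

print(abandons_and_returns(u_shape), is_u_shaped(u_shape, target))    # True True
print(abandons_and_returns(monotone), is_u_shaped(monotone, target))  # False False
```

Decisiveness forbids any such return; non-U-shaped learning relaxes the prohibition to returns to correct hypotheses only, which is the weakening whose effect on EX- and BC-learning the paper's second main result addresses.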
Pages: 694-709
Number of pages: 16