Cross-Situational Learning with Reservoir Computing for Language Acquisition Modelling

Cited by: 6
Authors
Juven, Alexis [1,2,3]
Hinaut, Xavier [1,2,3]
Affiliations
[1] INRIA Bordeaux Sud Ouest, Bordeaux, France
[2] Bordeaux INP, LaBRI, CNRS, UMR 5800, Bordeaux, France
[3] Univ Bordeaux, CNRS, UMR 5293, Inst Malad Neurodegenerat, Bordeaux, France
Source
2020 International Joint Conference on Neural Networks (IJCNN), 2020
Keywords
Recurrent Neural Networks; Reservoir Computing; Echo State Networks; Language Learning; Cross-situational Learning; Unsupervised Learning; Language Acquisition
DOI
10.1109/ijcnn48605.2020.9206650
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The mechanisms that enable children to rapidly learn word-to-meaning mappings through cross-situational learning under uncertain conditions are still a matter of debate. In particular, many models operate only at the word level rather than at the level of full sentence comprehension. We present a model of language acquisition that applies cross-situational learning to Recurrent Neural Networks within the Reservoir Computing paradigm. Using co-occurrences between words and visual perceptions, the model learns to ground a complex sentence, describing a scene involving different objects, in a perceptual representation space. The model processes sentences describing scenes it perceives simultaneously through a simulated vision module: sentences are the inputs and the simulated visual perceptions are the target outputs of the RNN. Evaluations show the model's capacity to extract the semantics of virtually hundreds of thousands of possible sentence combinations (generated from a context-free grammar); remarkably, it generalises after only a few hundred partially described scenes via cross-situational learning. Furthermore, it handles polysemous and synonymous words and deals with complex sentences in which word order is crucial for understanding. Finally, further improvements of the model are discussed with the aim of reaching proper reinforcement and self-supervised learning schemes, so that robots could acquire and ground language by themselves (without oracle supervision).
Pages: 8
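To make the setup described in the abstract concrete, the following is a minimal Python/NumPy sketch of an Echo State Network whose readout is trained by ridge regression to map a word sequence to a perceptual target vector. This is an illustrative reconstruction in the spirit of the model, not the authors' code: the vocabulary, reservoir size, leak rate, spectral radius, target dimension, and the use of only the final reservoir state are all assumptions.

```python
# Minimal sketch of the described setup (illustrative, not the authors' code):
# an Echo State Network reads a sentence word by word; a ridge-regression
# readout is trained to predict the perceptual (scene) representation that
# co-occurred with the sentence. All sizes and hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cup", "glass", "is", "on", "left", "right", "of"]
word_index = {w: i for i, w in enumerate(vocab)}

n_in = len(vocab)   # one-hot word inputs
n_res = 300         # reservoir size (assumed)
n_out = 12          # dimension of the perceptual target vector (assumed)

# Fixed random input and recurrent weights (the reservoir itself is not trained).
W_in = rng.uniform(-1, 1, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

leak = 0.3  # leaky-integrator rate (assumed)

def reservoir_states(sentence):
    """Run one sentence through the reservoir; return the final state."""
    x = np.zeros(n_res)
    for word in sentence.split():
        u = np.zeros(n_in)
        u[word_index[word]] = 1.0
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
    return x

def train_readout(sentences, targets, ridge=1e-3):
    """Ridge regression from final reservoir states to perceptual vectors."""
    X = np.stack([reservoir_states(s) for s in sentences])   # (n_samples, n_res)
    Y = np.stack(targets)                                    # (n_samples, n_out)
    A = X.T @ X + ridge * np.eye(n_res)
    return np.linalg.solve(A, X.T @ Y).T                     # W_out: (n_out, n_res)

# Usage idea: pair each training sentence with the (possibly partial) scene
# vector it co-occurred with; cross-situational statistics over many such
# pairs disambiguate word meanings.
# W_out = train_readout(train_sentences, train_scene_vectors)
# prediction = W_out @ reservoir_states("the cup is on the left of the glass")
```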