Cross-Situational Learning with Reservoir Computing for Language Acquisition Modelling

Cited by: 6
Authors
Juven, Alexis [1 ,2 ,3 ]
Hinaut, Xavier [1 ,2 ,3 ]
Affiliations
[1] INRIA Bordeaux Sud Ouest, Bordeaux, France
[2] Bordeaux INP, LaBRI, CNRS, UMR 5800, Bordeaux, France
[3] Univ Bordeaux, CNRS, UMR 5293, Inst Malad Neurodegenerat, Bordeaux, France
Source
2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2020
Keywords
Recurrent Neural Networks; Reservoir Computing; Echo State Networks; Language Learning; Cross-situational Learning; Unsupervised Learning; Language Acquisition;
DOI
10.1109/ijcnn48605.2020.9206650
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Understanding the mechanisms that enable children to rapidly learn word-to-meaning mappings through cross-situational learning under uncertain conditions is still a matter of debate. In particular, many models operate only at the word level, not at the level of full-sentence comprehension. We present a model of language acquisition that applies cross-situational learning to Recurrent Neural Networks (RNNs) within the Reservoir Computing paradigm. Using the co-occurrences between words and visual perceptions, the model learns to ground a complex sentence, describing a scene involving different objects, into a perceptual representation space. The model processes sentences describing scenes it perceives simultaneously via a simulated vision module: the sentences are the inputs and the simulated vision provides the target outputs of the RNN. Evaluations of the model show its capacity to extract the semantics of virtually hundreds of thousands of possible sentence combinations (based on a context-free grammar); remarkably, the model generalises after only a few hundred partially described scenes via cross-situational learning. Furthermore, it handles polysemous and synonymous words, and deals with complex sentences where word order is crucial for understanding. Finally, further improvements of the model are discussed in order to reach proper reinforced and self-supervised learning schemes, with the goal of enabling robots to acquire and ground language by themselves (with no oracle supervision).
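To make the reservoir-computing setup described above concrete, the following is a minimal sketch (not the authors' implementation): a tiny Echo State Network reads a one-hot word sequence, and a ridge-regression readout is trained to predict a scene vector marking which concepts are present. The vocabulary, sentences, and scene encodings here are invented toy examples; the paper uses a context-free grammar generating far larger sentence sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and scene concepts (hypothetical examples).
vocab = ["the", "cup", "ball", "is", "left", "right"]
concepts = ["cup", "ball", "left", "right"]
w2i = {w: i for i, w in enumerate(vocab)}

def encode(sentence):
    """One-hot encode each word of a sentence, one row per time step."""
    X = np.zeros((len(sentence), len(vocab)))
    for t, w in enumerate(sentence):
        X[t, w2i[w]] = 1.0
    return X

# Echo State Network: fixed random input and recurrent weights.
N = 100
W_in = rng.uniform(-1, 1, (N, len(vocab)))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def final_state(X, leak=0.3):
    """Run the leaky-integrator reservoir over a sentence, keep last state."""
    x = np.zeros(N)
    for u in X:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
    return x

# Cross-situational pairs: each sentence co-occurs with a scene vector
# (1 where a concept is present in the perceived scene).
data = [
    (["the", "cup", "is", "left"],   [1, 0, 1, 0]),
    (["the", "ball", "is", "right"], [0, 1, 0, 1]),
    (["the", "ball", "is", "left"],  [0, 1, 1, 0]),
    (["the", "cup", "is", "right"],  [1, 0, 0, 1]),
]
S = np.array([final_state(encode(s)) for s, _ in data])
Y = np.array([y for _, y in data], dtype=float)

# Ridge-regression readout: the only trained part of a reservoir model.
ridge = 1e-6
W_out = Y.T @ S @ np.linalg.inv(S.T @ S + ridge * np.eye(N))

pred = W_out @ final_state(encode(["the", "cup", "is", "left"]))
```

Only `W_out` is learned; the reservoir itself stays random and fixed, which is what makes training cheap compared with backpropagation through time.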
Pages: 8