Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning

Cited by: 12
Authors
Alamia, Andrea [1]
Gauducheau, Victor [1]
Paisios, Dimitri [1,2]
VanRullen, Rufin [1,3]
Affiliations
[1] CNRS, CerCo, F-31055 Toulouse, France
[2] Univ Toulouse, CNRS, Lab Cognit Langues Langage Ergon, Toulouse, France
[3] Univ Toulouse, ANITI, F-31055 Toulouse, France
Keywords
FINITE-STATE AUTOMATA; CONTEXT-FREE; DISTINCT MODES; IMPLICIT; LANGUAGE; KNOWLEDGE; CONNECTIONIST;
DOI
10.1038/s41598-020-79127-y
CLC Classification Numbers
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
In recent years, artificial neural networks have achieved performance close to, or better than, humans in several domains: tasks that were previously human prerogatives, such as language processing, have witnessed remarkable improvements in state-of-the-art models. One advantage of this technological boost is that it facilitates comparisons between different neural networks and human performance, in order to deepen our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) better matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies have shown that artificial grammars can be learnt by human subjects after little exposure, and often without explicit knowledge of the underlying rules. We tested four grammars of different complexity levels both in humans and in feedforward and recurrent networks. Our results show that both architectures can "learn" (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the grammar's complexity level. Moreover, in visual processing, feedforward and recurrent architectures have been related to unconscious and conscious processes, respectively; analogously, the difference in performance between the two architectures over ten regular grammars shows that simpler and more explicit grammars are better learnt by recurrent architectures, supporting the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks are hypothesized to capture the dynamics involved in implicit learning.
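To make the comparison in the abstract concrete, the snippet below is a minimal illustrative sketch (Python/PyTorch), not the authors' implementation: it generates strings from a hypothetical Reber-style finite-state grammar, corrupts half of them, and trains a small feedforward network (which sees the whole padded sequence at once) and a small recurrent network (which processes symbols one at a time) via back-propagation to judge grammaticality. The grammar transitions, network sizes, and hyper-parameters are all assumptions made for illustration only.

import random
import torch
import torch.nn as nn

# Hypothetical Reber-like finite-state grammar (illustrative, not the paper's grammars):
# state -> [(emitted symbol, next state)]; a next state of None terminates the string.
TRANSITIONS = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("V", 2), ("X", 1)],
    3: [("S", None), ("P", 2)],
}
SYMBOLS = "TPSXV"
SYM2IDX = {s: i for i, s in enumerate(SYMBOLS)}
MAX_LEN = 12

def grammatical_string():
    # Random walk through the grammar, truncated at MAX_LEN symbols.
    state, out = 0, []
    while state is not None and len(out) < MAX_LEN:
        sym, state = random.choice(TRANSITIONS[state])
        out.append(sym)
    return out

def ungrammatical_string():
    # Corrupt one symbol of a grammatical string (may occasionally remain legal;
    # acceptable noise for a sketch).
    s = grammatical_string()
    s[random.randrange(len(s))] = random.choice(SYMBOLS)
    return s

def encode(seq):
    # One-hot encode and zero-pad to a fixed MAX_LEN x |SYMBOLS| tensor.
    x = torch.zeros(MAX_LEN, len(SYMBOLS))
    for t, sym in enumerate(seq):
        x[t, SYM2IDX[sym]] = 1.0
    return x

class Feedforward(nn.Module):
    # Sees the whole padded sequence at once, flattened into a single vector.
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(MAX_LEN * len(SYMBOLS), hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

class Recurrent(nn.Module):
    # Processes the sequence symbol by symbol; classifies from the last hidden state.
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(len(SYMBOLS), hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        _, (h, _) = self.rnn(x)
        return self.out(h[-1]).squeeze(-1)

def train(model, n_batches=300, batch_size=32):
    # Binary grammaticality judgment trained with back-propagation.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(n_batches):
        seqs, labels = [], []
        for _ in range(batch_size):
            grammatical = random.random() < 0.5
            seq = grammatical_string() if grammatical else ungrammatical_string()
            seqs.append(encode(seq))
            labels.append(float(grammatical))
        opt.zero_grad()
        loss = loss_fn(model(torch.stack(seqs)), torch.tensor(labels))
        loss.backward()
        opt.step()
    return loss.item()

if __name__ == "__main__":
    for name, model in [("feedforward", Feedforward()), ("recurrent", Recurrent())]:
        print(name, "final training loss:", round(train(model), 3))

Under these assumptions, the two models differ only in how they consume the sequence (all at once vs. step by step), which is the architectural contrast the study evaluates against human learners.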
Pages: 15