Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning

Cited by: 0
Authors
Andrea Alamia
Victor Gauducheau
Dimitri Paisios
Rufin VanRullen
Institutions
[1] CerCo, Laboratoire Cognition, Langues, Langage, Ergonomie
[2] CNRS
[3] Université de Toulouse
[4] ANITI
Source
Scientific Reports, Volume 10
Abstract
In recent years, artificial neural networks have achieved performance close to or better than that of humans in several domains: tasks that were previously human prerogatives, such as language processing, have witnessed remarkable improvements in state-of-the-art models. One advantage of this technological boost is that it facilitates comparison between different neural networks and human performance, helping to deepen our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) best matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies have shown that artificial grammars can be learnt by human subjects after little exposure, often without explicit knowledge of the underlying rules. We tested four grammars with different complexity levels both in humans and in feedforward and recurrent networks. Our results show that both architectures can “learn” (via error back-propagation) the grammars after the same number of training sequences as humans, but recurrent networks perform closer to humans than feedforward ones, irrespective of grammar complexity. Moreover, as in visual processing, where feedforward and recurrent architectures have been related to unconscious and conscious processes respectively, the difference in performance between the two architectures over ten regular grammars shows that simpler and more explicit grammars are better learnt by recurrent architectures. This supports the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks may capture the dynamics involved in implicit learning.
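The abstract describes the modeling setup only at a high level. Below is a minimal, hypothetical sketch of such a comparison in PyTorch: grammatical strings are generated from a small finite-state ("Reber-like") grammar, corrupted copies serve as ungrammatical foils, and a feedforward and a recurrent classifier are each trained by error back-propagation on grammaticality judgements. The grammar, alphabet, network sizes and training regime here are illustrative assumptions, not the authors' materials or code.

```python
# Hypothetical sketch: feedforward vs. recurrent grammaticality judgement
# on a toy finite-state grammar (not the authors' grammars or code).
import random
import torch
import torch.nn as nn

SYMBOLS = "BTPSXVE"                       # assumed Reber-style alphabet
SYM2IDX = {s: i for i, s in enumerate(SYMBOLS)}
MAX_LEN = 12

# Toy regular grammar: state -> list of (emitted symbol, next state); None ends.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("V", 2), ("T", 3)],
    3: [("X", 2), ("S", None), ("V", None)],
}

def sample_grammatical():
    """Random walk through the grammar, resampling strings that grow too long."""
    while True:
        s, state = "B", 0
        while state is not None:
            sym, state = random.choice(GRAMMAR[state])
            s += sym
        if len(s) + 1 <= MAX_LEN:
            return s + "E"

def make_ungrammatical(s):
    """Corrupt one interior symbol (rarely this may still be grammatical)."""
    i = random.randrange(1, len(s) - 1)
    wrong = random.choice([c for c in SYMBOLS if c != s[i]])
    return s[:i] + wrong + s[i + 1:]

def encode(s):
    """One-hot encode a string, zero-padded to MAX_LEN positions."""
    x = torch.zeros(MAX_LEN, len(SYMBOLS))
    for i, c in enumerate(s):
        x[i, SYM2IDX[c]] = 1.0
    return x

def make_dataset(n_pairs):
    xs, ys = [], []
    for _ in range(n_pairs):
        g = sample_grammatical()
        xs += [encode(g), encode(make_ungrammatical(g))]
        ys += [1.0, 0.0]
    return torch.stack(xs), torch.tensor(ys)

class Feedforward(nn.Module):
    """Sees the whole padded sequence at once."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(MAX_LEN * len(SYMBOLS), hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

class Recurrent(nn.Module):
    """Processes the sequence one symbol at a time."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(len(SYMBOLS), hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        _, h = self.rnn(x)                # final hidden state: (1, batch, hidden)
        return self.out(h[-1]).squeeze(-1)

def train_and_eval(model, train, test, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    x_tr, y_tr = train
    for _ in range(epochs):               # learning via error back-propagation
        opt.zero_grad()
        loss_fn(model(x_tr), y_tr).backward()
        opt.step()
    x_te, y_te = test
    with torch.no_grad():
        return ((model(x_te) > 0).float() == y_te).float().mean().item()

if __name__ == "__main__":
    torch.manual_seed(0); random.seed(0)
    train, test = make_dataset(200), make_dataset(100)
    print("feedforward accuracy:", train_and_eval(Feedforward(), train, test))
    print("recurrent   accuracy:", train_and_eval(Recurrent(), train, test))
```

Whether a run of this sketch reproduces the paper's pattern (recurrent networks tracking human performance more closely, modulated by grammar complexity) depends on the chosen grammar and training budget; it is only meant to make the abstract's setup concrete.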