Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning

Citations: 0
Authors
Andrea Alamia
Victor Gauducheau
Dimitri Paisios
Rufin VanRullen
Affiliations
[1] CerCo, Laboratoire Cognition, Langues, Langage, Ergonomie
[2] CNRS
[3] Université de Toulouse
[4] ANITI
Source
Scientific Reports, volume 10
Abstract
In recent years, artificial neural networks have achieved performance close to, or better than, that of humans in several domains: tasks that were previously human prerogatives, such as language processing, have seen remarkable improvements in state-of-the-art models. One advantage of this technological boost is that it facilitates comparisons between different neural networks and human performance, deepening our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) better matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies have shown that human subjects can learn artificial grammars after little exposure, often without explicit knowledge of the underlying rules. We tested four grammars of different complexity levels in humans and in feedforward and recurrent networks. Our results show that both architectures can “learn” (via error back-propagation) the grammars after the same number of training sequences as humans, but recurrent networks perform closer to humans than feedforward ones, irrespective of grammar complexity. Moreover, just as feedforward and recurrent architectures have been related to unconscious and conscious processes, respectively, in visual processing, the difference in performance between the two architectures across ten regular grammars shows that simpler, more explicit grammars are better learnt by recurrent architectures. This supports the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks capture the dynamics involved in implicit learning.
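The artificial-grammar-learning paradigm described above relies on strings generated by a finite-state (regular) grammar, which participants or networks must classify as grammatical or not. A minimal sketch of such a setup is below; the transition table is an illustrative Reber-style grammar, not one of the grammars actually used in the study.

```python
import random

# An illustrative regular grammar as a finite-state automaton:
# each state maps to a list of (emitted symbol, next state) transitions.
# State 5 is the single accepting state. This table is a toy example
# in the style of classic artificial-grammar-learning stimuli.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
    5: [],  # accepting state: no outgoing transitions
}

def generate(rng):
    """Random walk through the automaton, yielding one grammatical string."""
    state, out = 0, []
    while GRAMMAR[state]:
        symbol, state = rng.choice(GRAMMAR[state])
        out.append(symbol)
    return "".join(out)

def is_grammatical(s):
    """Classify a string by tracking all states reachable while reading it."""
    states = {0}
    for ch in s:
        states = {nxt for st in states
                  for sym, nxt in GRAMMAR[st] if sym == ch}
        if not states:  # no transition consumes this symbol
            return False
    return 5 in states  # grammatical iff an accepting state is reached
```

Strings produced by `generate` serve as training stimuli, while `is_grammatical` provides the ground-truth label that a feedforward or recurrent network would be trained to predict, symbol by symbol or string by string.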
Related papers (50 total)
  • [1] Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning
    Alamia, Andrea
    Gauducheau, Victor
    Paisios, Dimitri
    VanRullen, Rufin
    SCIENTIFIC REPORTS, 2020, 10 (01)
  • [2] Feedforward Approximations to Dynamic Recurrent Network Architectures
    Muir, Dylan R.
    NEURAL COMPUTATION, 2018, 30 (02) : 546 - 567
  • [3] A Survey on Recurrent Neural Network Architectures for Sequential Learning
    Prakash, B. Shiva
    Sanjeev, K. V.
    Prakash, Ramesh
    Chandrasekaran, K.
    SOFT COMPUTING FOR PROBLEM SOLVING, 2019, 817 : 57 - 66
  • [4] Comparing Artificial Neural Network Architectures for Brazilian Stock Market Prediction
    Teixeira Zavadzki de Pauli S.
    Kleina M.
    Bonat W.H.
    Annals of Data Science, 2020, 7 (04) : 613 - 628
  • [5] Opposition logic and neural network models in artificial grammar learning
    Vokey, J. R.
    Higham, P. A.
    CONSCIOUSNESS AND COGNITION, 2004, 13 (03) : 565 - 578
  • [6] Comparing forecasting performances between multilayer feedforward neural network and recurrent neural network in Malaysia's load
    Mohamed, Norizan
    Ahmad, Maizah Hura
    Ismail, Zuhaimy
    Arshad, Khairil Anuar
    JOURNAL OF INTERDISCIPLINARY MATHEMATICS, 2010, 13 (02) : 125 - 134
  • [7] Multiscale computation on feedforward neural network and recurrent neural network
    Bin Li
    Xiaoying Zhuang
    Frontiers of Structural and Civil Engineering, 2020, 14 : 1285 - 1298
  • [8] Multiscale computation on feedforward neural network and recurrent neural network
    Li, Bin
    Zhuang, Xiaoying
    FRONTIERS OF STRUCTURAL AND CIVIL ENGINEERING, 2020, 14 (06) : 1285 - 1298
  • [9] Comparing digital neural network architectures
    Kashai, Y.
    Be'ery, Y.
    Proceedings of the IFIP WG 10.5 Workshop on Silicon Architectures for Neural Nets, 1991
  • [10] Comparing Functional Link Artificial Neural Network And Multilayer Feedforward Neural Network Model To Forecast Crude Oil Prices
    Hamdi, Manel
    Aloui, Chaker
    Nanda, Santosh Kumar
    ECONOMICS BULLETIN, 2016, 36 (04): : 2430 - +