Developing PFC representations using reinforcement learning

Cited: 36
Authors
Reynolds, Jeremy R. [1 ]
O'Reilly, Randall C. [2 ]
Affiliations
[1] Univ Denver, Dept Psychol, Denver, CO 80208 USA
[2] Univ Colorado, Dept Psychol, Boulder, CO 80309 USA
Keywords
PFC; Representation; Reinforcement learning; Functional organization; DORSOLATERAL PREFRONTAL CORTEX; WORKING-MEMORY; COGNITIVE CONTROL; COMPUTATIONAL MODEL; FRONTOPOLAR CORTEX; FMRI EVIDENCE; ORGANIZATION; INFORMATION; INTEGRATION; CHILDREN;
DOI
10.1016/j.cognition.2009.05.015
Chinese Library Classification (CLC): B84 [Psychology]
Discipline code: 04; 0402
Abstract
From both functional and biological considerations, it is widely believed that action production, planning, and goal-oriented behaviors supported by the frontal cortex are organized hierarchically [Fuster (1991); Koechlin, E., Ody, C., & Kouneiher, F. (2003). The architecture of cognitive control in the human prefrontal cortex. Science, 302, 1181-1185; Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the structure of behavior. New York: Holt]. However, the nature of the different levels of the hierarchy remains unclear, and little attention has been paid to the origins of such a hierarchy. We address these issues through biologically inspired computational models that develop representations through reinforcement learning. We explore several different factors in these models that might plausibly give rise to a hierarchical organization of representations within the PFC, including an initial connectivity hierarchy within PFC, a hierarchical set of connections between PFC and subcortical structures controlling it, and differential synaptic plasticity schedules. Simulation results indicate that architectural constraints contribute to the segregation of different types of representations, and that this segregation facilitates learning. These findings are consistent with the idea that there is a functional hierarchy in PFC, as captured in our earlier computational models of PFC function, and with a growing body of empirical data. (C) 2009 Elsevier B.V. All rights reserved.
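The abstract describes representations that emerge when a network is trained by reward alone, with higher-level context information coming to govern lower-level stimulus-response mappings. As a loose, hypothetical illustration of that core idea only (a minimal tabular Q-learning sketch on a context-reversal task, not the authors' biologically detailed PFC model; all names below are illustrative):

```python
import random

def run_task(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning on a toy hierarchical task: an outer 'context'
    cue reverses the stimulus-response mapping, loosely analogous to a
    higher-level representation gating a lower-level one."""
    rng = random.Random(seed)
    q = {}  # q[(context, stimulus)] -> [value of action 0, value of action 1]
    for _ in range(episodes):
        context = rng.randint(0, 1)
        stimulus = rng.randint(0, 1)
        vals = q.setdefault((context, stimulus), [0.0, 0.0])
        # epsilon-greedy action selection
        if rng.random() < eps:
            action = rng.randint(0, 1)
        else:
            action = 0 if vals[0] >= vals[1] else 1
        # context 0: respond with the stimulus; context 1: respond with its reverse
        correct = stimulus if context == 0 else 1 - stimulus
        reward = 1.0 if action == correct else 0.0
        # incremental update toward the observed reward
        vals[action] += alpha * (reward - vals[action])
    return q

def greedy_accuracy(q):
    """Fraction of the four (context, stimulus) states answered correctly."""
    hits = 0
    for context in (0, 1):
        for stimulus in (0, 1):
            vals = q.get((context, stimulus), [0.0, 0.0])
            action = 0 if vals[0] >= vals[1] else 1
            correct = stimulus if context == 0 else 1 - stimulus
            hits += action == correct
    return hits / 4
```

Because the reward contingency depends jointly on context and stimulus, reward feedback alone is enough to carve out context-conditional responses; the paper's question is how analogous structure self-organizes across anatomically distinct PFC areas rather than in a flat lookup table.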
Pages: 281-292
Page count: 12
Related papers (50 records)
  • [31] Genome Assembly Using Reinforcement Learning
    Xavier, Roberto
    de Souza, Kleber Padovani
    Chateau, Annie
    Alves, Ronnie
    ADVANCES IN BIOINFORMATICS AND COMPUTATIONAL BIOLOGY, BSB 2019, 2020, 11347 : 16 - 28
  • [32] Reinforcement learning with multiple representations in the basal ganglia loops for sequential motor control
    Nakahara, H
    Doya, K
    Hikosaka, O
    Nagano, S
    IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE, 1998, : 1553 - 1558
  • [33] Tuning pianos using reinforcement learning
    Millard, Matthew
    Tizhoosh, Hamid R.
    APPLIED ACOUSTICS, 2007, 68 (05) : 576 - 593
  • [34] Redirection Controller Using Reinforcement Learning
    Chang, Yuchen
    Matsumoto, Keigo
    Narumi, Takuji
    Tanikawa, Tomohiro
    Hirose, Michitaka
    IEEE ACCESS, 2021, 9 : 145083 - 145097
  • [35] Automatic berthing using supervised learning and reinforcement learning
    Shimizu, Shoma
    Nishihara, Kenta
    Miyauchi, Yoshiki
    Wakita, Kouki
    Suyama, Rin
    Maki, Atsuo
    Shirakawa, Shinichi
    OCEAN ENGINEERING, 2022, 265
  • [36] StARformer: Transformer with State-Action-Reward Representations for Visual Reinforcement Learning
    Shang, Jinghuan
    Kahatapitiya, Kumara
    Li, Xiang
    Ryoo, Michael S.
    COMPUTER VISION, ECCV 2022, PT XXXIX, 2022, 13699 : 462 - 479
  • [37] Relational Verification using Reinforcement Learning
    Chen, Jia
    Wei, Jiayi
    Feng, Yu
    Bastani, Osbert
    Dillig, Isil
    PROCEEDINGS OF THE ACM ON PROGRAMMING LANGUAGES-PACMPL, 2019, 3 (OOPSLA):
  • [38] Device Codesign using Reinforcement Learning
    Cardwell, Suma G.
    Patel, Karan
    Schuman, Catherine D.
    Smith, J. Darby
    Kwon, Jaesuk
    Maicke, Andrew
    Arzate, Jared
    Incorvia, Jean Anne C.
    2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,
  • [39] Using reinforcement learning for image thresholding
    Shokri, M
    Tizhoosh, HR
    CCECE 2003: CANADIAN CONFERENCE ON ELECTRICAL AND COMPUTER ENGINEERING, VOLS 1-3, PROCEEDINGS: TOWARD A CARING AND HUMANE TECHNOLOGY, 2003, : 1231 - 1234
  • [40] Autonomous drifting using reinforcement learning
    Orgován L.
    Bécsi T.
    Aradi S.
    Periodica Polytechnica Transportation Engineering, 2021, 49 (03): : 292 - 300