Developing PFC representations using reinforcement learning

Cited by: 35
Authors:
Reynolds, Jeremy R. [1 ]
O'Reilly, Randall C. [2 ]
Affiliations:
[1] Univ Denver, Dept Psychol, Denver, CO 80208 USA
[2] Univ Colorado, Dept Psychol, Boulder, CO 80309 USA
Keywords:
PFC; Representation; Reinforcement learning; Functional organization; DORSOLATERAL PREFRONTAL CORTEX; WORKING-MEMORY; COGNITIVE CONTROL; COMPUTATIONAL MODEL; FRONTOPOLAR CORTEX; FMRI EVIDENCE; ORGANIZATION; INFORMATION; INTEGRATION; CHILDREN;
DOI:
10.1016/j.cognition.2009.05.015
Chinese Library Classification (CLC):
B84 [Psychology]
Discipline codes:
04; 0402
Abstract
From both functional and biological considerations, it is widely believed that action production, planning, and goal-oriented behaviors supported by the frontal cortex are organized hierarchically [Fuster (1991); Koechlin, E., Ody, C., & Kouneiher, F. (2003). The architecture of cognitive control in the human prefrontal cortex. Science, 302, 1181-1185; Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the structure of behavior. New York: Holt]. However, the nature of the different levels of the hierarchy remains unclear, and little attention has been paid to the origins of such a hierarchy. We address these issues through biologically inspired computational models that develop representations through reinforcement learning. We explore several factors in these models that might plausibly give rise to a hierarchical organization of representations within the PFC, including an initial connectivity hierarchy within PFC, a hierarchical set of connections between PFC and subcortical structures controlling it, and differential synaptic plasticity schedules. Simulation results indicate that architectural constraints contribute to the segregation of different types of representations, and that this segregation facilitates learning. These findings are consistent with the idea that there is a functional hierarchy in PFC, as captured in our earlier computational models of PFC function and a growing body of empirical data. (C) 2009 Elsevier B.V. All rights reserved.
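The factors named in the abstract (an initial connectivity hierarchy within PFC, hierarchical connections between PFC and the subcortical structures controlling it, and differential synaptic plasticity schedules) can be made concrete with a small sketch. The Python code below is a hypothetical toy construction for illustration only, not the authors' biologically based model: a two-level softmax policy trained with REINFORCE on a context-dependent response task, where the anterior (context-to-gate) weights learn more slowly than the posterior (gate-plus-stimulus-to-response) weights. The task, network sizes, and learning rates are all assumptions made for this sketch.

    # Hypothetical sketch (not the paper's model): a two-level policy trained with
    # REINFORCE on a context-dependent task. The outer context cue determines which
    # stimulus-response mapping is rewarded; the anterior level turns the context
    # into a maintained "gate" state, and the posterior level turns the gate plus
    # the current stimulus into a response. The anterior weights use a slower
    # learning rate, a toy analogue of differential synaptic plasticity schedules.
    import numpy as np

    rng = np.random.default_rng(0)
    n_context, n_stim, n_gate, n_resp = 2, 2, 2, 2

    W_ant = rng.normal(0.0, 0.1, (n_gate, n_context))         # anterior: context -> gate
    W_post = rng.normal(0.0, 0.1, (n_resp, n_gate * n_stim))  # posterior: (gate x stim) -> response
    lr_ant, lr_post = 0.02, 0.2                                # slow vs. fast plasticity

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def one_hot(i, n):
        v = np.zeros(n)
        v[i] = 1.0
        return v

    reward_history = []
    for trial in range(50_000):
        ctx = rng.integers(n_context)                # outer task cue
        stim = rng.integers(n_stim)                  # inner stimulus
        target = stim if ctx == 0 else 1 - stim      # context reverses the mapping

        x_ctx = one_hot(ctx, n_context)

        # Anterior level: sample a maintained gate state from the context cue.
        p_gate = softmax(W_ant @ x_ctx)
        gate = rng.choice(n_gate, p=p_gate)

        # Posterior level: conjunctive (gate x stimulus) code drives the response.
        x_post = np.outer(one_hot(gate, n_gate), one_hot(stim, n_stim)).ravel()
        p_resp = softmax(W_post @ x_post)
        resp = rng.choice(n_resp, p=p_resp)

        reward = 1.0 if resp == target else 0.0
        reward_history.append(reward)

        # REINFORCE updates with a fixed 0.5 baseline; the two levels share the
        # same reward signal but learn at different rates.
        adv = reward - 0.5
        W_ant += lr_ant * adv * np.outer(one_hot(gate, n_gate) - p_gate, x_ctx)
        W_post += lr_post * adv * np.outer(one_hot(resp, n_resp) - p_resp, x_post)

    print("mean reward, last 5000 trials:", np.mean(reward_history[-5000:]))

    # Greedy-policy check over the four trial types (usually 4/4 after training,
    # though symmetry breaking between the two levels depends on the random seed).
    correct = 0
    for ctx in range(n_context):
        for stim in range(n_stim):
            gate = int(np.argmax(W_ant @ one_hot(ctx, n_context)))
            x_post = np.outer(one_hot(gate, n_gate), one_hot(stim, n_stim)).ravel()
            resp = int(np.argmax(W_post @ x_post))
            correct += int(resp == (stim if ctx == 0 else 1 - stim))
    print("greedy accuracy:", correct, "/ 4")

With enough trials the two levels typically differentiate: the slow anterior weights settle into a stable context code while the fast posterior weights track the stimulus-response contingencies. That is the flavor of the segregation-facilitates-learning result the abstract describes, though exact convergence in this toy version depends on the random seed.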
Pages: 281-292
Number of pages: 12
Related papers
50 records in total (items [21]-[30] shown)
  • [21] Developing multi-agent adversarial environment using reinforcement learning and imitation learning
    Han, Ziyao
    Liang, Yupeng
    Ohkura, Kazuhiro
    Artificial Life and Robotics, 2023, 28 : 703 - 709
  • [22] Learning State Representations for Query Optimization with Deep Reinforcement Learning
    Ortiz, Jennifer
    Balazinska, Magdalena
    Gehrke, Johannes
    Keerthi, S. Sathiya
    PROCEEDINGS OF THE SECOND WORKSHOP ON DATA MANAGEMENT FOR END-TO-END MACHINE LEARNING, 2018,
  • [23] Learning Representations in Model-Free Hierarchical Reinforcement Learning
    Rafati, Jacob
    Noelle, David C.
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 10009 - 10010
  • [24] On Developing a UAV Pursuit-Evasion Policy Using Reinforcement Learning
    Vlahov, Bogdan
    Squires, Eric
    Strickland, Laura
    Pippin, Charles
    2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2018, : 859 - 864
  • [25] Pretraining Representations for Data-Efficient Reinforcement Learning
    Schwarzer, Max
    Rajkumar, Nitarshan
    Noukhovitch, Michael
    Anand, Ankesh
    Charlin, Laurent
    Hjelm, Devon
    Bachman, Philip
    Courville, Aaron
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [26] Temporal-Difference Reinforcement Learning with Distributed Representations
    Kurth-Nelson, Zeb
    Redish, A. David
    PLOS ONE, 2009, 4 (10):
  • [27] Representations for Stable Off-Policy Reinforcement Learning
    Ghosh, Dibya
    Bellemare, Marc G.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [28] Investigating the properties of neural network representations in reinforcement learning
    Wang, Han
    Miahi, Erfan
    White, Martha
    Machado, Marlos C.
    Abbas, Zaheer
    Kumaraswamy, Raksha
    Liu, Vincent
    White, Adam
    ARTIFICIAL INTELLIGENCE, 2024, 330
  • [29] Representations for Stable Off-Policy Reinforcement Learning
    Ghosh, Dibya
    Bellemare, Marc G.
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [30] Conditional Mutual Information for Disentangled Representations in Reinforcement Learning
    Dunion, Mhairi
    McInroe, Trevor
    Luck, Kevin Sebastian
    Hanna, Josiah P.
    Albrecht, Stefano V.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,