Developing PFC representations using reinforcement learning

Cited by: 35
Authors
Reynolds, Jeremy R. [1 ]
O'Reilly, Randall C. [2 ]
Affiliations
[1] Univ Denver, Dept Psychol, Denver, CO 80208 USA
[2] Univ Colorado, Dept Psychol, Boulder, CO 80309 USA
Keywords
PFC; Representation; Reinforcement learning; Functional organization; DORSOLATERAL PREFRONTAL CORTEX; WORKING-MEMORY; COGNITIVE CONTROL; COMPUTATIONAL MODEL; FRONTOPOLAR CORTEX; FMRI EVIDENCE; ORGANIZATION; INFORMATION; INTEGRATION; CHILDREN;
DOI
10.1016/j.cognition.2009.05.015
Chinese Library Classification
B84 [Psychology];
Discipline Classification Code
04; 0402;
Abstract
From both functional and biological considerations, it is widely believed that action production, planning, and goal-oriented behaviors supported by the frontal cortex are organized hierarchically [Fuster (1991); Koechlin, E., Ody, C., & Kouneiher, F. (2003). Neuroscience: The architecture of cognitive control in the human prefrontal cortex. Science, 424, 1181-1184; Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the Structure of Behavior. New York: Holt]. However, the nature of the different levels of the hierarchy remains unclear, and little attention has been paid to the origins of such a hierarchy. We address these issues through biologically inspired computational models that develop representations through reinforcement learning. We explore several factors in these models that might plausibly give rise to a hierarchical organization of representations within the PFC, including an initial connectivity hierarchy within PFC, a hierarchical set of connections between PFC and the subcortical structures controlling it, and differential synaptic plasticity schedules. Simulation results indicate that architectural constraints contribute to the segregation of different types of representations, and that this segregation facilitates learning. These findings are consistent with the idea that there is a functional hierarchy in PFC, as captured in our earlier computational models of PFC function and a growing body of empirical data. (C) 2009 Elsevier B.V. All rights reserved.
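To give a concrete sense of how reinforcement learning can shape task representations of the kind the abstract describes, the sketch below runs tabular Q-learning on a toy task in which the correct response to a stimulus depends on a higher-level context cue. This is a hypothetical illustration only: the paper's models use biologically detailed PFC/basal-ganglia mechanisms, not a tabular Q-table, and the task and parameter values here are invented for the example.

```python
import random

def q_learning(episodes=2000, alpha=0.2, epsilon=0.1, seed=0):
    """Tabular Q-learning on a one-step, context-dependent choice task."""
    rng = random.Random(seed)
    contexts, stimuli, actions = 2, 2, 2
    # Q-table over the conjunction (context, stimulus) -> action values.
    q = {(c, s): [0.0, 0.0] for c in range(contexts) for s in range(stimuli)}
    for _ in range(episodes):
        c, s = rng.randrange(contexts), rng.randrange(stimuli)
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            a = rng.randrange(actions)
        else:
            a = max(range(actions), key=lambda i: q[(c, s)][i])
        # Reward rule: context 1 reverses the stimulus-response mapping.
        r = 1.0 if a == (s if c == 0 else 1 - s) else 0.0
        # One-step task, so the update has no bootstrapped next-state term.
        q[(c, s)][a] += alpha * (r - q[(c, s)][a])
    return q

q = q_learning()
# The learned greedy policy respects the context-dependent rule.
policy = {k: max(range(2), key=lambda i: q[k][i]) for k in q}
```

The point of the toy task is that no purely stimulus-level representation suffices: reward can only be predicted from the conjunction of context and stimulus, which is the kind of pressure the paper argues drives hierarchical representation learning in PFC.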
Pages: 281 - 292
Page count: 12