The "Proactive" Model of Learning: Integrative Framework for Model-Free and Model-Based Reinforcement Learning Utilizing the Associative Learning-Based Proactive Brain Concept

Cited by: 18
Authors
Zsuga, Judit [1 ]
Biro, Klara [1 ]
Papp, Csaba [1 ]
Tajti, Gabor [1 ]
Gesztelyi, Rudolf [2 ]
Affiliations
[1] Univ Debrecen, Fac Publ Hlth, Dept Hlth Syst Management & Qual Management Hlth, Nagyerdei Krt 98, H-4032 Debrecen, Hungary
[2] Univ Debrecen, Fac Pharm, Dept Pharmacol, H-4032 Debrecen, Hungary
Keywords
model-free reinforcement learning; model-based reinforcement learning; reinforcement learning agent; proactive brain; default network; GOAL-DIRECTED BEHAVIORS; ORBITOFRONTAL CORTEX; DOPAMINE NEURONS; PREDICTION ERROR; PREFRONTAL CORTEX; VENTRAL STRIATUM; BASOLATERAL AMYGDALA; INCENTIVE SALIENCE; NUCLEUS-ACCUMBENS; REPRESENT REWARD;
DOI
10.1037/bne0000116
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology]
Discipline Classification Codes
03; 0303; 030303; 04; 0402
Abstract
Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by a scalar reward signal, with learning taking place when expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC), whereas the model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA), and the ventral striatum (VS). Based on the functional connectivity of the VS, both the model-free and model-based RL systems converge on the VS, which computes value by integrating model-free signals (received as reward prediction errors) with model-based, reward-related input. Using the concept of the reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model used for model-based computations, we turn to the proactive brain concept, which posits a ubiquitous function for the default network based on its extensive functional overlap with contextual associative areas. By means of the default network, the brain continuously organizes its environment into context frames, enabling the formation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into these context frames when computing reward expectation, by compiling the stimulus-reward and context-reward information offered by the amygdala and the hippocampus, respectively. Furthermore, we suggest that the integration of model-based reward expectations into the value signal is further supported by efferents of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS).
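For illustration only (this sketch is not taken from the article): the abstract's distinction between a model-free reward prediction error and a model-based reward expectation, both feeding into a single value signal, can be written as a minimal hybrid value update. All names and parameters below (hybrid_update, ALPHA, GAMMA, W_MB, the toy random-walk environment) are hypothetical choices made for this sketch, not constructs described by the authors.

import random

N_STATES = 5     # toy state space; the last state is terminal and rewarded
ALPHA = 0.1      # learning rate for the model-free temporal-difference update
GAMMA = 0.9      # temporal discount factor
W_MB = 0.3       # weight of the model-based reward expectation in the value update

values = [0.0] * N_STATES            # cached, model-free state values (the "value signal")
expected_reward = [0.0] * N_STATES   # model-based running average of reward per state
reward_counts = [0] * N_STATES

def model_based_expectation(state, reward):
    # Update a simple running-average reward model for this state (context frame).
    reward_counts[state] += 1
    expected_reward[state] += (reward - expected_reward[state]) / reward_counts[state]
    return expected_reward[state]

def hybrid_update(state, reward, next_state):
    # Model-free component: classic temporal-difference reward prediction error.
    rpe = reward + GAMMA * values[next_state] - values[state]
    # Model-based component: deviation of the learned reward expectation from the cached value.
    mb_term = model_based_expectation(state, reward) - values[state]
    # Weighted integration of both signals into a single value estimate.
    values[state] += ALPHA * ((1 - W_MB) * rpe + W_MB * mb_term)

# Toy run: random walk toward the terminal state, which is the only rewarded one.
for episode in range(200):
    state = 0
    while state < N_STATES - 1:
        next_state = min(state + random.choice([1, 2]), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        hybrid_update(state, reward, next_state)
        state = next_state

print([round(v, 2) for v in values])

Running the script prints the learned state values; the weighting W_MB controls how strongly the model-based expectation, as opposed to the pure prediction-error signal, shapes each value estimate.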
Pages: 6-18
Page count: 13
Related Papers
50 records in total
  • [21] Sun, S.; Lan, X.; Zhang, H.; Zheng, N. Model-Based Reinforcement Learning in Robotics: A Survey. Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2022, 35(1): 1-16.
  • [22] Pan, Minting; Zhu, Xiangming; Zheng, Yitao; Wang, Yunbo; Yang, Xiaokang. Model-Based Reinforcement Learning With Isolated Imaginations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(5): 2788-2803.
  • [23] Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi. Model-based reinforcement learning with dimension reduction. Neural Networks, 2016, 84: 1-16.
  • [24] Yoshida, W.; Ishii, S. Model-based reinforcement learning: a computational model and an fMRI study. Neurocomputing, 2005, 63: 253-269.
  • [25] Tajima, Yoshiyuki; Onisawa, Takehisa. Model-based reinforcement learning with model error and its application. Proceedings of SICE Annual Conference, Vols 1-8, 2007: 1333-1336.
  • [26] Pal, Constantin-Valentin; Leon, Florin. A Brief Survey of Model-Based Reinforcement Learning Techniques. 2020 24th International Conference on System Theory, Control and Computing (ICSTCC), 2020: 92-97.
  • [27] Schad, Daniel J.; Juenger, Elisabeth; Sebold, Miriam; Garbusow, Maria; Bernhardt, Nadine; Javadi, Amir-Homayoun; Zimmermann, Ulrich S.; Smolka, Michael N.; Heinz, Andreas; Rapp, Michael A.; Huys, Quentin J. M. Processing speed enhances model-based over model-free reinforcement learning in the presence of high working memory functioning. Frontiers in Psychology, 2014, 5.
  • [28] Smith, Andrew; Li, Ming; Becker, Sue; Kapur, Shitij. Dopamine, prediction error and associative learning: A model-based account. Network: Computation in Neural Systems, 2006, 17(1): 61-84.
  • [29] Jeong, Seunghwan; Woo, Honguk. A Configurable Model-Based Reinforcement Learning Framework for Disaggregated Storage Systems. IEEE Access, 2023, 11: 14876-14891.
  • [30] Bakhshi, Bahador; Mangues-Bafalluy, Josep. Model-Based Reinforcement Learning Framework of Online Network Resource Allocation. IEEE International Conference on Communications (ICC 2022), 2022: 4456-4461.