The "Proactive" Model of Learning: Integrative Framework for Model-Free and Model-Based Reinforcement Learning Utilizing the Associative Learning-Based Proactive Brain Concept

Cited by: 18
Authors
Zsuga, Judit [1 ]
Biro, Klara [1 ]
Papp, Csaba [1 ]
Tajti, Gabor [1 ]
Gesztelyi, Rudolf [2 ]
Affiliations
[1] Univ Debrecen, Fac Publ Hlth, Dept Hlth Syst Management & Qual Management Hlth, Nagyerdei Krt 98, H-4032 Debrecen, Hungary
[2] Univ Debrecen, Fac Pharm, Dept Pharmacol, H-4032 Debrecen, Hungary
Keywords
model-free reinforcement learning; model-based reinforcement learning; reinforcement learning agent; proactive brain; default network; GOAL-DIRECTED BEHAVIORS; ORBITOFRONTAL CORTEX; DOPAMINE NEURONS; PREDICTION ERROR; PREFRONTAL CORTEX; VENTRAL STRIATUM; BASOLATERAL AMYGDALA; INCENTIVE SALIENCE; NUCLEUS-ACCUMBENS; REPRESENT REWARD;
DOI
10.1037/bne0000116
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology]
Discipline Classification Codes
03; 0303; 030303; 04; 0402
Abstract
Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by a scalar reward signal, with learning taking place when expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC), whereas the model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA), and the ventral striatum (VS). Based on the functional connectivity of the VS, the model-free and model-based RL systems converge on the VS, which computes value by integrating model-free signals (received as reward prediction error) with model-based, reward-related input. Using the concept of the reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model used for model-based computations, we turn to the proactive brain concept, which assigns a ubiquitous function to the default network based on its extensive functional overlap with contextual associative areas. By means of the default network, the brain continuously organizes its environment into context frames, enabling the formulation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into these context frames, computing reward expectation by compiling the stimulus-reward and context-reward information offered by the amygdala and the hippocampus, respectively. Furthermore, we suggest that the integration of model-based reward expectations into the value signal is further supported by OFC efferents that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS).
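As an illustration of the computational idea described in the abstract, and not part of the original article, the following Python sketch shows one conventional way such an integration can be expressed: a model-free temporal-difference learner driven by a scalar reward prediction error, combined with a one-step model-based reward expectation derived from learned reward and transition models. The class name HybridValueAgent, the mixing weight w, and all parameter values are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np


class HybridValueAgent:
    """Toy hybrid learner: model-free TD values mixed with a model-based estimate."""

    def __init__(self, n_states, alpha=0.1, gamma=0.95, w=0.5):
        self.alpha = alpha                     # learning rate
        self.gamma = gamma                     # temporal discount factor
        self.w = w                             # weight given to the model-based estimate
        self.v_mf = np.zeros(n_states)         # model-free (cached) state values
        self.r_model = np.zeros(n_states)      # learned reward model
        self.t_model = np.full((n_states, n_states), 1.0 / n_states)  # learned transitions

    def model_free_update(self, s, r, s_next):
        # TD(0): learning occurs only when expectations are violated,
        # i.e., when the reward prediction error (rpe) is nonzero.
        rpe = r + self.gamma * self.v_mf[s_next] - self.v_mf[s]
        self.v_mf[s] += self.alpha * rpe
        return rpe

    def model_based_update(self, s, r, s_next):
        # Incrementally refine the reward and transition models ("the model").
        self.r_model[s] += self.alpha * (r - self.r_model[s])
        target = np.zeros(self.t_model.shape[1])
        target[s_next] = 1.0
        self.t_model[s] += self.alpha * (target - self.t_model[s])

    def value(self, s):
        # Integrated value signal: a one-step model-based expectation is
        # mixed with the cached model-free value for the same state.
        v_mb = self.r_model[s] + self.gamma * self.t_model[s] @ self.v_mf
        return self.w * v_mb + (1.0 - self.w) * self.v_mf[s]
```

Very loosely, the weighted sum in value() plays the role the article attributes to the VS, while the reward and transition models stand in for the context-frame-based expectations attributed to the OFC, hippocampus, and amygdala; this mapping is an interpretive sketch, not the authors' model.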
Pages: 6-18
Number of pages: 13
Related Papers
50 in total
  • [41] Entity Abstraction in Visual Model-Based Reinforcement Learning
    Veerapaneni, Rishi
    Co-Reyes, John D.
    Chang, Michael
    Janner, Michael
    Finn, Chelsea
    Wu, Jiajun
    Tenenbaum, Joshua
    Levine, Sergey
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
  • [42] Multiple model-based reinforcement learning for nonlinear control
    Samejima, K
    Katagiri, K
    Doya, K
    Kawato, M
    ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE, 2006, 89 (09): 54 - 69
  • [43] Survey of Model-Based Reinforcement Learning: Applications on Robotics
    Polydoros, Athanasios S.
    Nalpantidis, Lazaros
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2017, 86 (02) : 153 - 173
  • [45] Likelihood Estimator for Multi Model-Based Reinforcement Learning
    Albarrans, Guilherme
    Freire, Valdinei
    INTELLIGENT SYSTEMS, BRACIS 2024, PT II, 2025, 15413 : 184 - 198
  • [46] Model-based reinforcement learning for approximate optimal regulation
    Kamalapurkar, Rushikesh
    Walters, Patrick
    Dixon, Warren E.
    AUTOMATICA, 2016, 64 : 94 - 104
  • [47] Offline Model-Based Reinforcement Learning for Tokamak Control
    Char, Ian
    Abbate, Joseph
    Bardoczi, Laszlo
    Boyer, Mark D.
    Chung, Youngseog
    Conlin, Rory
    Erickson, Keith
    Mehta, Viraj
    Richner, Nathan
    Kolemen, Egemen
    Schneider, Jeff
    LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 211, 2023, 211
  • [48] Efficient Reinforcement Learning Method for Multi-Phase Robot Manipulation Skill Acquisition via Human Knowledge, Model-Based, and Model-Free Methods
    Liu, Xing
    Liu, Zihao
    Wang, Gaozhao
    Liu, Zhengxiong
    Huang, Panfeng
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2025, 22 : 6643 - 6652
  • [49] Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model
    Li, Gen
    Wei, Yuting
    Chi, Yuejie
    Chen, Yuxin
    OPERATIONS RESEARCH, 2024, 72 (01) : 203 - 221
  • [50] Variational Inference MPC for Bayesian Model-based Reinforcement Learning
    Okada, Masashi
    Taniguchi, Tadahiro
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100