Model-based aversive learning in humans is supported by preferential task state reactivation

Cited by: 14
Authors
Wise, Toby [1 ,2 ,3 ]
Liu, Yunzhe [4 ,5 ]
Chowdhury, Fatima [1 ,2 ,6 ]
Dolan, Raymond J. [1 ,2 ,4 ]
Affiliations
[1] UCL, Max Planck UCL Ctr Computat Psychiat & Ageing Res, London, England
[2] UCL, Wellcome Ctr Human Neuroimaging, London, England
[3] CALTECH, Div Humanities & Social Sci, Pasadena, CA 91125 USA
[4] Beijing Normal Univ, IDG McGovern Inst Brain Res, State Key Lab Cognit Neurosci & Learning, Beijing, Peoples R China
[5] Chinese Inst Brain Res, Beijing, Peoples R China
[6] UCL Queen Sq Inst Neurol, Queen Sq MS Ctr, Dept Neuroinflammat, London, England
Funding
UK Wellcome Trust;
Keywords
HIPPOCAMPAL PLACE CELLS; REVERSE REPLAY; MEMORY; REPRESENTATIONS; OSCILLATIONS; MECHANISMS; SEQUENCES; FUTURE; CORTEX; EXPERIENCE;
DOI
10.1126/sciadv.abf9616
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Harm avoidance is critical for survival, yet little is known regarding the neural mechanisms supporting avoidance in the absence of trial-and-error experience. Flexible avoidance may be supported by a mental model (i.e., model-based), a process for which neural reactivation and sequential replay have emerged as candidate mechanisms. Using an aversive learning task combined with magnetoencephalography, we show prospective and retrospective reactivation during planning and learning, respectively, coupled with evidence for sequential replay. Specifically, when individuals plan in an aversive context, we find preferential reactivation of subsequently chosen goal states. Stronger reactivation is associated with greater hippocampal theta power. At outcome receipt, unchosen goal states are reactivated regardless of outcome valence. Replay of paths leading to goal states is modulated by outcome valence, with aversive outcomes associated with stronger reverse replay than safe outcomes. Our findings suggest that avoidance involves simulation of unexperienced states through hippocampally mediated reactivation and replay.
Pages: 14