Model-based aversive learning in humans is supported by preferential task state reactivation

Cited by: 14
Authors
Wise, Toby [1 ,2 ,3 ]
Liu, Yunzhe [4 ,5 ]
Chowdhury, Fatima [1 ,2 ,6 ]
Dolan, Raymond J. [1 ,2 ,4 ]
Affiliations
[1] UCL, Max Planck UCL Ctr Computat Psychiat & Ageing Res, London, England
[2] UCL, Wellcome Ctr Human Neuroimaging, London, England
[3] CALTECH, Div Humanities & Social Sci, Pasadena, CA 91125 USA
[4] Beijing Normal Univ, IDG McGovern Inst Brain Res, State Key Lab Cognit Neurosci & Learning, Beijing, Peoples R China
[5] Chinese Inst Brain Res, Beijing, Peoples R China
[6] UCL Queen Sq Inst Neurol, Queen Sq MS Ctr, Dept Neuroinflammat, London, England
Funding
Wellcome Trust (UK);
Keywords
HIPPOCAMPAL PLACE CELLS; REVERSE REPLAY; MEMORY; REPRESENTATIONS; OSCILLATIONS; MECHANISMS; SEQUENCES; FUTURE; CORTEX; EXPERIENCE;
DOI
10.1126/sciadv.abf9616
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline codes
07 ; 0710 ; 09 ;
Abstract
Harm avoidance is critical for survival, yet little is known regarding the neural mechanisms supporting avoidance in the absence of trial-and-error experience. Flexible avoidance may be supported by a mental model (i.e., model-based), a process for which neural reactivation and sequential replay have emerged as candidate mechanisms. Using magnetoencephalography during an aversive learning task, we show prospective and retrospective reactivation during planning and learning, respectively, coupled with evidence for sequential replay. Specifically, when individuals plan in an aversive context, we find preferential reactivation of subsequently chosen goal states. Stronger reactivation is associated with greater hippocampal theta power. At outcome receipt, unchosen goal states are reactivated regardless of outcome valence. Replay of paths leading to goal states is modulated by outcome valence, with aversive outcomes associated with stronger reverse replay than safe outcomes. Our findings suggest that avoidance involves simulation of unexperienced states through hippocampally mediated reactivation and replay.
Pages: 14
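For orientation only: the abstract's measures of reactivation and sequential replay are derived from state probabilities decoded from MEG data, and replay direction is assessed by asking whether lagged co-activation of decoded states follows the task transition structure forwards or backwards. The sketch below is a minimal illustration of one simplified forward-minus-backward "sequenceness" score under a lagged cross-covariance formulation; it is not the authors' analysis pipeline (replay studies from this group typically use GLM-based temporally delayed linear modelling), and the function name, arguments, and lag range are hypothetical.

```python
import numpy as np

def sequenceness(state_probs, transitions, max_lag=60):
    """Illustrative forward-minus-backward sequenceness score (hypothetical helper).

    state_probs : (n_timepoints, n_states) decoded state reactivation
        probabilities, e.g. from classifiers applied to MEG sensor data.
    transitions : (n_states, n_states) binary task transition matrix,
        with transitions[i, j] = 1 if state j follows state i in the task.
    Returns one score per time lag (in samples); positive values indicate
    forward replay, negative values reverse replay.
    """
    n_time, n_states = state_probs.shape
    scores = np.zeros(max_lag)
    for lag in range(1, max_lag + 1):
        x = state_probs[:-lag]            # state evidence at time t
        y = state_probs[lag:]             # state evidence at time t + lag
        cov = x.T @ y / x.shape[0]        # lagged cross-state co-activation
        fwd = np.mean(cov[transitions == 1])    # co-activation along task transitions
        bwd = np.mean(cov[transitions.T == 1])  # co-activation against task transitions
        scores[lag - 1] = fwd - bwd
    return scores
```

In this toy formulation, the stronger reverse replay after aversive outcomes reported in the abstract would correspond to more negative scores at short lags following outcome receipt.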