Neural Signatures of Prediction Errors in a Decision-Making Task Are Modulated by Action Execution Failures

Cited: 12
Authors
McDougle, Samuel D. [1 ]
Butcher, Peter A. [2 ]
Parvin, Darius E. [1 ]
Mushtaq, Faisal [3 ]
Niv, Yael [2 ,4 ]
Ivry, Richard B. [1 ,5 ]
Taylor, Jordan A. [2 ,4 ]
Affiliations
[1] Univ Calif Berkeley, Dept Psychol, 2121 Berkeley Way, Berkeley, CA 94704 USA
[2] Princeton Univ, Dept Psychol, South Dr, Princeton, NJ 08540 USA
[3] Univ Leeds, Sch Psychol, 4 Lifton Pl, Leeds LS2 9JZ, W Yorkshire, England
[4] Princeton Univ, Princeton Neurosci Inst, South Dr, Princeton, NJ 08540 USA
[5] Univ Calif Berkeley, Li Ka Shing Ctr, Helen Wills Neurosci Inst, Berkeley, CA 94720 USA
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
REWARD; SELECTION; CHOICES; HUMANS;
DOI
10.1016/j.cub.2019.04.011
Chinese Library Classification (CLC)
Q5 [Biochemistry]; Q7 [Molecular Biology];
Discipline Codes
071010 ; 081704 ;
Abstract
Decisions must be implemented through actions, and actions are prone to error. As such, when an expected outcome is not obtained, an individual should be sensitive not only to whether the choice itself was suboptimal but also to whether the action required to indicate that choice was executed successfully. The intelligent assignment of credit to action execution versus action selection has clear ecological utility for the learner. To explore this, we used a modified version of a classic reinforcement learning task in which feedback indicated whether or not negative prediction errors were associated with execution errors. Using fMRI, we asked whether prediction error computations in the human striatum, a key substrate in reinforcement learning and decision making, are modulated when a failure in action execution results in the negative outcome. Participants were more tolerant of non-rewarded outcomes when these resulted from execution errors than when execution was successful but reward was withheld. Consistent with this behavior, a model-driven analysis of neural activity revealed an attenuation of the signal associated with negative reward prediction errors in the striatum following execution failures. These results converge with other lines of evidence suggesting that prediction errors in the mesostriatal dopamine system integrate high-level information during the evaluation of instantaneous reward outcomes.
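The credit-assignment idea in the abstract, namely down-weighting negative reward prediction errors that stem from execution failures rather than from poor choices, can be illustrated with a minimal delta-rule simulation. This is a hedged sketch, not the authors' actual model: the gating parameter `exec_gate`, the reward probabilities, and the execution-failure rate are all hypothetical values chosen for illustration.

```python
import random

def simulate_agent(n_trials=1000, alpha=0.1, exec_gate=0.3, seed=0):
    """Delta-rule learner over two options. Negative prediction errors
    caused by execution failures are attenuated by exec_gate
    (a hypothetical parameter, 1.0 = no attenuation)."""
    rng = random.Random(seed)
    V = [0.5, 0.5]            # value estimates for the two options
    p_reward = [0.7, 0.3]     # reward probability given successful execution
    p_exec_fail = 0.2         # probability the movement misses the target
    for _ in range(n_trials):
        # epsilon-greedy choice between the two options
        if rng.random() < 0.1:
            c = rng.randrange(2)
        else:
            c = max((0, 1), key=lambda i: V[i])
        exec_fail = rng.random() < p_exec_fail
        reward = 0.0 if exec_fail else float(rng.random() < p_reward[c])
        rpe = reward - V[c]
        # credit assignment: discount negative RPEs due to motor error,
        # since they say little about the quality of the choice itself
        if exec_fail and rpe < 0:
            rpe *= exec_gate
        V[c] += alpha * rpe
    return V
```

With `exec_gate` well below 1, the learner's value estimates are less depressed by unrewarded trials that it can attribute to its own motor error, mirroring the behavioral tolerance for execution failures reported in the abstract.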
Pages: 1606+
Page count: 13