Dopamine encoding of novelty facilitates efficient uncertainty-driven exploration

Cited by: 0
Authors
Wang, Yuhao [1 ]
Lak, Armin [2 ]
Manohar, Sanjay G. [3 ]
Bogacz, Rafal [1 ]
Affiliations
[1] Univ Oxford, MRC Brain Network Dynam Unit, Oxford, England
[2] Univ Oxford, Dept Physiol Anat & Genet, Oxford, England
[3] Univ Oxford, Nuffield Dept Clin Neurosci, Oxford, England
Funding
Wellcome Trust (UK); UK Research and Innovation; Medical Research Council (UK); Biotechnology and Biological Sciences Research Council (UK);
Keywords
STRIATAL DOPAMINE; NEURONS; VARIABILITY; PREDICTION; HUMANS; CHOICE; SYSTEM;
DOI
10.1371/journal.pcbi.1011516
Chinese Library Classification
Q5 [Biochemistry];
Discipline Classification Codes
071010; 081704;
Abstract
When facing an unfamiliar environment, animals need to explore to gain new knowledge about which actions provide reward, but they must also put the newly acquired knowledge to use as quickly as possible. Optimal reinforcement learning strategies should therefore assess the uncertainties of these action-reward associations and utilise them to inform decision making. We propose a novel model whereby direct and indirect striatal pathways act together to estimate both the mean and variance of reward distributions, and mesolimbic dopaminergic neurons provide transient novelty signals, facilitating effective uncertainty-driven exploration. We used electrophysiological recording data to verify our model of the basal ganglia, and we fitted exploration strategies derived from the neural model to data from behavioural experiments. We also compared, in simulation, the performance of directed exploration strategies inspired by our basal ganglia model with other exploration algorithms, including classic variants of the upper confidence bound (UCB) strategy. The exploration strategies inspired by the basal ganglia model achieved superior overall performance in simulation, and fitting them to behavioural data gave results qualitatively similar to those obtained with more idealised normative models that have less implementation-level detail. Overall, our results suggest that transient dopamine levels in the basal ganglia that encode novelty could contribute to an uncertainty representation which efficiently drives exploration in reinforcement learning.

Humans and other animals learn from the rewards and losses resulting from their actions to maximise their chances of survival. In many cases, a trial-and-error process is necessary to determine the most rewarding action in a given context. During this process, determining how much of the available resources should be allocated to acquiring information ("exploration") and how much to utilising the existing information to maximise reward ("exploitation") is key to overall effectiveness, i.e., the maximisation of total reward obtained with a certain amount of effort. We propose a theory whereby an area within the mammalian brain called the basal ganglia integrates current knowledge about the mean reward, reward uncertainty and novelty of an action in order to implement an algorithm which optimally allocates resources between exploration and exploitation. We verify our theory using behavioural experiments and electrophysiological recordings, and show in simulations that the model also achieves good performance in comparison with established benchmark algorithms.
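To make the kind of strategy described in the abstract concrete, the following sketch shows a generic UCB-style bandit agent whose choice value combines a running mean-reward estimate, a sample-standard-deviation uncertainty bonus, and a count-based novelty bonus that decays with familiarity. This is purely illustrative and is not the authors' basal ganglia model; all names (novelty_ucb_bandit, beta, phi) and the specific bonus terms are hypothetical assumptions for the example.

import numpy as np

def novelty_ucb_bandit(true_means, n_trials=1000, beta=1.0, phi=1.0, seed=0):
    # Illustrative bandit agent (not the paper's model): pick the arm maximising
    # estimated mean + beta * estimated std (uncertainty) + phi * novelty bonus.
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    counts = np.zeros(n_arms)   # number of pulls per arm
    means = np.zeros(n_arms)    # running estimate of each arm's mean reward
    m2 = np.zeros(n_arms)       # running sum of squared deviations (Welford)
    total_reward = 0.0

    for _ in range(n_trials):
        # Uncertainty estimate: sample std per arm, defaulting high for barely tried arms
        stds = np.where(counts > 1, np.sqrt(m2 / np.maximum(counts - 1, 1)), 1.0)
        # Novelty bonus decays as an arm is chosen more often
        novelty = 1.0 / np.sqrt(counts + 1.0)
        # Directed exploration: optimistic value combines mean, uncertainty and novelty
        values = means + beta * stds + phi * novelty
        arm = int(np.argmax(values))

        # Sample a noisy reward from the chosen arm
        reward = rng.normal(true_means[arm], 1.0)
        total_reward += reward

        # Welford's online update of mean and variance for the chosen arm
        counts[arm] += 1
        delta = reward - means[arm]
        means[arm] += delta / counts[arm]
        m2[arm] += delta * (reward - means[arm])

    return total_reward, means

if __name__ == "__main__":
    total, est = novelty_ucb_bandit(true_means=[0.2, 0.5, 0.8])
    print(f"total reward: {total:.1f}, estimated means: {np.round(est, 2)}")

The count-based term 1 / sqrt(n + 1) is one common way to make an exploration bonus fade as an option becomes familiar, loosely mirroring the transient novelty signal described in the abstract; the paper's own mechanism for estimating reward variance via the striatal pathways is not reproduced here.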
Pages: 27
Related papers
49 items in total
  • [1] Variability in Action Selection Relates to Striatal Dopamine 2/3 Receptor Availability in Humans: A PET Neuroimaging Study Using Reinforcement Learning and Active Inference Models
    Adams, Rick A.
    Moutoussis, Michael
    Nour, Matthew M.
    Dahoun, Tarik
    Lewis, Declan
    Illingworth, Benjamin
    Veronese, Mattia
    Mathys, Christoph
    de Boer, Lieke
    Guitart-Masip, Marc
    Friston, Karl J.
    Howes, Oliver D.
    Roiser, Jonathan P.
    [J]. CEREBRAL CORTEX, 2020, 30 (06) : 3573 - 3589
  • [2] Behavioral functions of the mesolimbic dopaminergic system: An affective neuroethological perspective
    Alcaro, Antonio
    Huber, Robert
    Panksepp, Jaak
    [J]. BRAIN RESEARCH REVIEWS, 2007, 56 (02) : 283 - 321
  • [3] Finite-time analysis of the multiarmed bandit problem
    Auer, P
    Cesa-Bianchi, N
    Fischer, P
    [J]. MACHINE LEARNING, 2002, 47 (2-3) : 235 - 256
  • [4] What does dopamine mean?
    Berke, Joshua D.
    [J]. NATURE NEUROSCIENCE, 2018, 21 (06) : 787 - 793
  • [5] Chapelle, O.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, 2011, 24
  • [6] Cieslak, Przemyslaw Eligiusz
    [J]. ENEURO, 2018, 5, DOI 10.1523/ENEURO.0331-18.2018
  • [7] Dopamine blockade impairs the exploration-exploitation trade-off in rats
    Cinotti, Francois
    Fresno, Virginie
    Aklil, Nassim
    Coutureau, Etienne
    Girard, Benoit
    Marchand, Alain R.
    Khamassi, Mehdi
    [J]. SCIENTIFIC REPORTS, 2019, 9 (1)
  • [8] Opponent Actor Learning (OpAL): Modeling Interactive Effects of Striatal Dopamine on Reinforcement Learning and Choice Incentive
    Collins, Anne G. E.
    Frank, Michael J.
    [J]. PSYCHOLOGICAL REVIEW, 2014, 121 (03) : 337 - 366
  • [9] Dopamine Modulates Novelty Seeking Behavior During Decision Making
    Costa, Vincent D.
    Tran, Valery L.
    Turchi, Janita
    Averbeck, Bruno B.
    [J]. BEHAVIORAL NEUROSCIENCE, 2014, 128 (05) : 556 - 566
  • [10] Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson's method
    Cousineau, Denis
    [J]. TUTORIALS IN QUANTITATIVE METHODS FOR PSYCHOLOGY, 2005, 1 (01) : 42 - 45