A reinforcement learning diffusion decision model for value-based decisions

Cited by: 96
Authors
Fontanesi, Laura [1 ]
Gluth, Sebastian [1 ]
Spektor, Mikhail S. [1 ]
Rieskamp, Joerg [1 ]
Affiliations
[1] Univ Basel, Fac Psychol, Missionsstr 62a, CH-4055 Basel, Switzerland
Funding
Swiss National Science Foundation;
Keywords
Decision-making; Computational modeling; Bayesian inference and parameter estimation; Response time models; CHOICE; EXPLAIN; BRAIN; FMRI;
DOI
10.3758/s13423-018-1554-2
CLC (Chinese Library Classification) number
B841 [Psychological research methods];
Discipline code
040201;
Abstract
Psychological models of value-based decision-making describe how subjective values are formed and mapped to single choices. Recently, additional efforts have been made to describe the temporal dynamics of these processes by adopting sequential sampling models from the perceptual decision-making tradition, such as the diffusion decision model (DDM). These models, when applied to value-based decision-making, allow mapping of subjective values not only to choices but also to response times. However, very few attempts have been made to adapt these models to situations in which decisions are followed by rewards, thereby producing learning effects. In this study, we propose a new combined reinforcement learning diffusion decision model (RLDDM) and test it on a learning task in which pairs of options differ with respect to both value difference and overall value. We found that participants became more accurate and faster with learning, responded faster and more accurately when options had more dissimilar values, and decided faster when confronted with more attractive (i.e., overall more valuable) pairs of options. We demonstrate that the suggested RLDDM can accommodate these effects and does so better than previously proposed models. To gain a better understanding of the model dynamics, we also compare it to standard DDMs and reinforcement learning models. Our work is a step forward towards bridging the gap between two traditions of decision-making research.
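The abstract describes a model in which a delta-rule reinforcement learner updates option values trial by trial, and the current value difference sets the drift rate of a diffusion process that produces both the choice and the response time. The following is a minimal illustrative sketch of that idea, not the paper's fitted model; all parameter names and values (learning rate, drift scaling, boundary, non-decision time) are assumptions for demonstration:

```python
import random

def rlddm_simulate(rewards, alpha=0.1, v_scale=2.0, a=1.0, t0=0.3,
                   dt=0.001, sigma=1.0, seed=0):
    """Minimal RL-DDM sketch: a delta-rule learner tracks Q-values for
    two options; the Q-value difference scales the drift rate of a
    diffusion process simulated by Euler steps, yielding choices and RTs."""
    rng = random.Random(seed)
    q = [0.5, 0.5]                  # initial value estimates for both options
    choices, rts = [], []
    for trial_rewards in rewards:   # trial_rewards = (reward_opt0, reward_opt1)
        drift = v_scale * (q[0] - q[1])   # positive drift favors option 0
        x, t = 0.0, 0.0
        while abs(x) < a / 2:             # symmetric bounds at +/- a/2
            x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
        choice = 0 if x > 0 else 1
        choices.append(choice)
        rts.append(t + t0)                # add non-decision time
        # delta-rule update for the chosen option only
        q[choice] += alpha * (trial_rewards[choice] - q[choice])
    return choices, rts, q

# Option 0 pays 1 on every trial, option 1 pays 0: with learning, the
# value difference (and hence the drift toward option 0) should grow.
choices, rts, q = rlddm_simulate([(1.0, 0.0)] * 200)
```

Because the learned value difference feeds the drift rate, the sketch reproduces the qualitative pattern reported in the abstract: as learning progresses, decisions become both faster and more accurate.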
Pages: 1099-1121
Page count: 23
Related papers
50 records
  • [31] Challenges of interpreting frontal neurons during value-based decision-making
    Wallis, Jonathan D.
    Rich, Erin L.
    FRONTIERS IN NEUROSCIENCE, 2011, 5
  • [32] Decision Making Based on Reinforcement Learning and Emotion Learning for Social Behavior
    Matsuda, Atsushi
    Misawa, Hideaki
    Horio, Keiichi
    IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ 2011), 2011, : 2714 - 2719
  • [33] Value-based decision-making in regular alcohol consumers following experimental manipulation of alcohol value
    Copeland, Amber
    Stafford, Tom
    Field, Matt
    ADDICTIVE BEHAVIORS, 2024, 156
  • [35] Neurocognitive mechanisms underlying value-based decision-making: from core values to economic value
    Brosch, Tobias
    Sander, David
    FRONTIERS IN HUMAN NEUROSCIENCE, 2013, 7
  • [36] Value-based attention but not divisive normalization influences decisions with multiple alternatives
    Gluth, Sebastian
    Kern, Nadja
    Kortmann, Maria
    Vitali, Cecile L.
    NATURE HUMAN BEHAVIOUR, 2020, 4 (06) : 634 - 645
  • [37] Classic EEG motor potentials track the emergence of value-based decisions
    Gluth, Sebastian
    Rieskamp, Joerg
    Buechel, Christian
    NEUROIMAGE, 2013, 79 : 394 - 403
  • [38] Confidence in Evaluations and Value-Based Decisions Reflects Variation in Experienced Values
    Quandt, Julian
    Figner, Bernd
    Holland, Rob W.
    Veling, Harm
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY-GENERAL, 2022, 151 (04) : 820 - 836
  • [39] Value-Based Decision Making: Decision Theory Meets e-Government
    Sundberg, Leif
    Gidlund, Katarina L.
    ELECTRONIC GOVERNMENT (EGOV 2017), 2017, 10428 : 351 - 358
  • [40] Quantifying the time for accurate EEG decoding of single value-based decisions
    Tzovara, Athina
    Chavarriaga, Ricardo
    De Lucia, Marzia
    JOURNAL OF NEUROSCIENCE METHODS, 2015, 250 : 114 - 125