A reinforcement learning diffusion decision model for value-based decisions

Cited by: 94
Authors
Fontanesi, Laura [1 ]
Gluth, Sebastian [1 ]
Spektor, Mikhail S. [1 ]
Rieskamp, Joerg [1 ]
Affiliations
[1] Univ Basel, Fac Psychol, Missionsstr 62a, CH-4055 Basel, Switzerland
Funding
Swiss National Science Foundation;
Keywords
Decision-making; Computational modeling; Bayesian inference and parameter estimation; Response time models; CHOICE; EXPLAIN; BRAIN; FMRI;
DOI
10.3758/s13423-018-1554-2
Chinese Library Classification (CLC) code
B841 [Psychological research methods];
Discipline classification code
040201;
Abstract
Psychological models of value-based decision-making describe how subjective values are formed and mapped to single choices. Recently, additional efforts have been made to describe the temporal dynamics of these processes by adopting sequential sampling models from the perceptual decision-making tradition, such as the diffusion decision model (DDM). These models, when applied to value-based decision-making, allow mapping of subjective values not only to choices but also to response times. However, very few attempts have been made to adapt these models to situations in which decisions are followed by rewards, thereby producing learning effects. In this study, we propose a new combined reinforcement learning diffusion decision model (RLDDM) and test it on a learning task in which pairs of options differ with respect to both value difference and overall value. We found that participants became more accurate and faster with learning, responded faster and more accurately when options had more dissimilar values, and decided faster when confronted with more attractive (i.e., overall more valuable) pairs of options. We demonstrate that the suggested RLDDM can accommodate these effects and does so better than previously proposed models. To gain a better understanding of the model dynamics, we also compare it to standard DDMs and reinforcement learning models. Our work is a step forward towards bridging the gap between two traditions of decision-making research.
Pages: 1099-1121 (23 pages)