Proselfs depend more on model-based than model-free learning in a non-social probabilistic state-transition task

Cited: 0
Authors
Oguchi, Mineki [1 ]
Li, Yang [1 ,2 ]
Matsumoto, Yoshie [1 ,3 ]
Kiyonari, Toko [4 ]
Yamamoto, Kazuhiko [5 ]
Sugiura, Shigeki [5 ]
Sakagami, Masamichi [1 ]
Institutions
[1] Tamagawa Univ, Brain Sci Inst, 6-1-1 Tamagawagakuen, Machida, Tokyo, Japan
[2] Nagoya Univ, Grad Sch Informat, Nagoya, Japan
[3] Seinan Gakuin Univ, Fac Human Sci, Dept Psychol, Fukuoka, Japan
[4] Aoyama Gakuin Univ, Sch Social Informat, Sagamihara, Kanagawa, Japan
[5] Genesis Res Inst, Nagoya, Aichi, Japan
Keywords
SOCIAL VALUE ORIENTATION; DECISION-MAKING; MECHANISMS; FOUNDATIONS; COOPERATION; ARBITRATION; INFERENCE; REWARDS; SYSTEMS;
DOI
10.1038/s41598-023-27609-0
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Humans form complex societies in which we routinely make social decisions about allocating resources between ourselves and others. One dimension that particularly characterizes social decision-making is whether to prioritize self-interest or the interests of others, that is, whether one is proself or prosocial. What causes this individual difference in social value orientation? Recent developments in social dual-process theory argue that social decision-making is shaped by its underlying domain-general learning systems: the model-free and model-based systems. In line with this "learning" approach, we propose and experimentally test the hypothesis that differences in social preferences stem from which learning system is dominant in an individual. Here, we used a non-social state-transition task that allowed us to assess the balance between model-free and model-based learning and to investigate its relation to social value orientation. The results showed that proselfs depended more on model-based learning, whereas prosocials depended more on model-free learning. Analyses of reward amounts and reaction times showed that proselfs learned the task structure earlier in the session than prosocials, reflecting this difference in reliance on model-based versus model-free learning. These findings support the learning hypothesis of what produces differences in social preferences and have implications for understanding the mechanisms of prosocial behavior.
Pages: 15