Proselfs depend more on model-based than model-free learning in a non-social probabilistic state-transition task

Cited: 0
Authors
Oguchi, Mineki [1 ]
Li, Yang [1 ,2 ]
Matsumoto, Yoshie [1 ,3 ]
Kiyonari, Toko [4 ]
Yamamoto, Kazuhiko [5 ]
Sugiura, Shigeki [5 ]
Sakagami, Masamichi [1 ]
Affiliations
[1] Tamagawa Univ, Brain Sci Inst, 6-1-1 Tamagawagakuen, Machida, Tokyo, Japan
[2] Nagoya Univ, Grad Sch Informat, Nagoya, Japan
[3] Seinan Gakuin Univ, Fac Human Sci, Dept Psychol, Fukuoka, Japan
[4] Aoyama Gakuin Univ, Sch Social Informat, Sagamihara, Kanagawa, Japan
[5] Genesis Res Inst, Nagoya, Aichi, Japan
Keywords
SOCIAL VALUE ORIENTATION; DECISION-MAKING; MECHANISMS; FOUNDATIONS; COOPERATION; ARBITRATION; INFERENCE; REWARDS; SYSTEMS;
DOI
10.1038/s41598-023-27609-0
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Science]
Discipline codes
07; 0710; 09
Abstract
Humans form complex societies in which we routinely engage in social decision-making about how to allocate resources between ourselves and others. One dimension that particularly characterizes social decision-making is whether to prioritize self-interest or regard for others, that is, whether one is proself or prosocial. What causes this individual difference in social value orientation? Recent developments in social dual-process theory argue that social decision-making is characterized by its underlying domain-general learning systems: the model-free and model-based systems. In line with this "learning" approach, we propose and experimentally test the hypothesis that differences in social preferences stem from which learning system is dominant in an individual. Here, we used a non-social state-transition task that allowed us to assess the balance between model-free and model-based learning and to investigate its relation to social value orientation. The results showed that proselfs depended more on model-based learning, whereas prosocials depended more on model-free learning. Analyses of reward amounts and reaction times showed that proselfs learned the task structure earlier in the session than prosocials, reflecting the difference in their reliance on model-based versus model-free learning. These findings support the learning hypothesis about the origin of differences in social preferences and have implications for understanding the mechanisms of prosocial behavior.
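The balance between model-based and model-free learning in probabilistic state-transition tasks of this kind is commonly quantified by fitting a hybrid reinforcement-learning model in which a single weight arbitrates between the two systems (the approach popularized by Daw et al.'s two-step task). The sketch below illustrates that general idea only; it is not the authors' analysis code, and the task structure, parameter names, and values (ALPHA, BETA, W, P_COMMON) are illustrative assumptions.

```python
# Minimal sketch of a hybrid model-free / model-based learner on a
# two-stage state-transition task. Illustrative only: parameters and
# reward probabilities are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

ALPHA, BETA, W = 0.3, 5.0, 0.5    # learning rate, softmax inverse temperature, MB weight
P_COMMON = 0.7                    # probability of the "common" transition

q_mf = np.zeros(2)                # model-free values of the two first-stage actions
q_stage2 = np.zeros(2)            # learned values of the two second-stage states
reward_prob = np.array([0.8, 0.2])  # hypothetical reward probabilities per state

def softmax_choice(q):
    """Sample an action with softmax probabilities over value estimates."""
    p = np.exp(BETA * (q - q.max()))
    p /= p.sum()
    return rng.choice(len(q), p=p)

for trial in range(200):
    # Model-based values: look one step ahead using knowledge of the
    # transition structure (action 0 commonly leads to state 0, etc.).
    q_mb = np.array([
        P_COMMON * q_stage2[0] + (1 - P_COMMON) * q_stage2[1],
        P_COMMON * q_stage2[1] + (1 - P_COMMON) * q_stage2[0],
    ])
    # The weight W arbitrates between the two systems.
    q_net = W * q_mb + (1 - W) * q_mf

    action = softmax_choice(q_net)
    common = rng.random() < P_COMMON
    state2 = action if common else 1 - action
    reward = float(rng.random() < reward_prob[state2])

    # Temporal-difference updates; the MF update ignores the transition model.
    q_stage2[state2] += ALPHA * (reward - q_stage2[state2])
    q_mf[action] += ALPHA * (q_stage2[state2] - q_mf[action])
```

Fitting the weight W to each participant's choices (0 = purely model-free, 1 = purely model-based) yields the kind of per-individual index that the hypothesis above relates to social value orientation.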
Pages: 15
Related papers
50 records in total
  • [31] Successor features combine elements of model-free and model-based reinforcement learning
    Lehnert, Lucas
    Littman, Michael L.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21
  • [32] Multifidelity Reinforcement Learning With Gaussian Processes: Model-Based and Model-Free Algorithms
    Suryan, Varun
    Gondhalekar, Nahush
    Tokekar, Pratap
    IEEE ROBOTICS & AUTOMATION MAGAZINE, 2020, 27 (02) : 117 - 128
  • [33] Model-based and model-free Pavlovian reward learning: Revaluation, revision, and revelation
    Dayan, Peter
    Berridge, Kent C.
    COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE, 2014, 14 (02) : 473 - 492
  • [34] Parallel model-based and model-free reinforcement learning for card sorting performance
    Steinke, Alexander
    Lange, Florian
    Kopp, Bruno
    SCIENTIFIC REPORTS, 2020, 10
  • [35] ACUTE STRESS EFFECTS ON MODEL-BASED VERSUS MODEL-FREE REINFORCEMENT LEARNING
    Otto, Ross
    Raio, Candace
    Phelps, Elizabeth
    Daw, Nathaniel
    JOURNAL OF COGNITIVE NEUROSCIENCE, 2013 : 178 - 179
  • [36] Connecting Model-Based and Model-Free Control With Emotion Modulation in Learning Systems
    Huang, Xiao
    Wu, Wei
    Qiao, Hong
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2021, 51 (08) : 4624 - 4638
  • [37] Model-based analysis of learning latent structures in probabilistic reversal learning task
    Masumi, Akira
    Sato, Takashi
    ARTIFICIAL LIFE AND ROBOTICS, 2021, 26 (03) : 275 - 282
  • [39] Predictive representations can link model-based reinforcement learning to model-free mechanisms
    Russek, Evan M.
    Momennejad, Ida
    Botvinick, Matthew M.
    Gershman, Samuel J.
    Daw, Nathaniel D.
    PLOS COMPUTATIONAL BIOLOGY, 2017, 13 (09)
  • [40] Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning
    Chebotar, Yevgen
    Hausman, Karol
    Zhang, Marvin
    Sukhatme, Gaurav
    Schaal, Stefan
    Levine, Sergey
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING (ICML), 2017, PMLR 70