Prosocial learning: Model-based or model-free?

Cited by: 0
Authors
Navidi, Parisa [1 ]
Saeedpour, Sepehr [2 ]
Ershadmanesh, Sara [3 ,4 ]
Hossein, Mostafa Miandari [5 ]
Bahrami, Bahador [6 ]
Affiliations
[1] Inst Cognit Sci Studies, Dept Cognit Psychol, Tehran, Iran
[2] Univ Tehran, Dept Elect & Comp Engn, Tehran, Iran
[3] Inst Res Fundamental Sci, Sch Cognit Sci, Tehran, Iran
[4] MPI Biol Cybernet, Dept Computat Neurosci, Tübingen, Germany
[5] Univ Toronto, Dept Psychol, Toronto, ON, Canada
[6] Ludwig Maximilians Univ München, Dept Gen Psychol & Educ, Crowd Cognit Grp, Munich, Germany
Source
PLOS ONE | 2023, Vol. 18, Issue 06
Funding
European Research Council
Keywords
DECISION-MAKING; SOCIAL ANXIETY; SELF; CHOICE; OTHERS; PERSPECTIVE; MECHANISMS; PREDICTION; SYSTEMS; HABITS;
DOI
10.1371/journal.pone.0287563
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [Natural Sciences, General]
Discipline classification codes
07; 0710; 09
Abstract
Prosocial learning involves acquiring the knowledge and skills necessary for making decisions that benefit others. We asked whether, in the context of value-based decision-making, learning strategies differ when people learn for themselves versus for others. We implemented a two-step reinforcement learning paradigm in which participants learned, in separate blocks, to make decisions either for themselves or for another person (a present confederate) who evaluated their performance. Our results replicated the canonical features of model-based (MB) and model-free (MF) reinforcement learning. The behaviour of most participants was best explained by a mixture of MB and MF control, with the majority relying more heavily on MB control, and this strategy enhanced their learning success. Regarding our key self-other hypothesis, we found no significant difference in behavioural performance, nor in the model-based parameters of learning, between the self and other conditions.
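For readers unfamiliar with the hybrid model referenced in the abstract, the sketch below simulates a canonical mixture of model-based and model-free control on a two-step task in the style of Daw et al. (2011). The task constants, parameter values (alpha, beta, w, lam), and update rules are illustrative assumptions for exposition, not the authors' fitted model or code.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed task structure: two stage-1 actions, each leading to one of two
    # stage-2 states with common (0.7) vs. rare (0.3) transition probabilities.
    P_TRANS = np.array([[0.7, 0.3],
                        [0.3, 0.7]])

    def softmax(q, beta):
        p = np.exp(beta * (q - q.max()))
        return p / p.sum()

    def simulate(n_trials=200, alpha=0.3, beta=3.0, w=0.6, lam=0.9):
        """Hybrid agent: stage-1 values are w * model-based + (1 - w) * model-free."""
        q_mf1 = np.zeros(2)       # model-free stage-1 action values
        q2 = np.zeros((2, 2))     # stage-2 action values (shared by both systems)
        reward_p = rng.uniform(0.25, 0.75, size=(2, 2))  # drifting reward probabilities
        earned = 0.0
        for _ in range(n_trials):
            # Model-based values: transition model times best attainable stage-2 value.
            q_mb1 = P_TRANS @ q2.max(axis=1)
            q_net = w * q_mb1 + (1 - w) * q_mf1
            a1 = rng.choice(2, p=softmax(q_net, beta))
            s2 = rng.choice(2, p=P_TRANS[a1])    # common or rare transition
            a2 = rng.choice(2, p=softmax(q2[s2], beta))
            r = float(rng.random() < reward_p[s2, a2])
            earned += r
            # Model-free updates, with an eligibility trace carrying the stage-2
            # prediction error back to the stage-1 value.
            delta1 = q2[s2, a2] - q_mf1[a1]
            delta2 = r - q2[s2, a2]
            q2[s2, a2] += alpha * delta2
            q_mf1[a1] += alpha * (delta1 + lam * delta2)
            # Slow random walk keeps reward contingencies changing, which is
            # what makes continued learning necessary.
            reward_p = np.clip(reward_p + rng.normal(0.0, 0.025, size=(2, 2)), 0.25, 0.75)
        return earned / n_trials

    print(f"mean reward per trial: {simulate():.3f}")

Setting w close to 1 yields purely model-based behaviour, while w = 0 recovers a pure model-free learner; this mixture continuum is what such studies fit to each participant's choices.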
Pages: 15
Related papers
50 in total
  • [21] Discovering Implied Serial Order Through Model-Free and Model-Based Learning
    Jensen, Greg
    Terrace, Herbert S.
    Ferrera, Vincent P.
    FRONTIERS IN NEUROSCIENCE, 2019, 13
  • [22] Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning
    Doll, Bradley B.
    Bath, Kevin G.
    Daw, Nathaniel D.
    Frank, Michael J.
    JOURNAL OF NEUROSCIENCE, 2016, 36 (04): 1211 - 1222
  • [24] Successor features combine elements of model-free and model-based reinforcement learning
    Lehnert, Lucas
    Littman, Michael L.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21
  • [25] Neural Computations Underlying Arbitration between Model-Based and Model-free Learning
    Lee, Sang Wan
    Shimojo, Shinsuke
    O'Doherty, John P.
    NEURON, 2014, 81 (03): 687 - 699
  • [26] Model-based and model-free Pavlovian reward learning: Revaluation, revision, and revelation
    Dayan, Peter
    Berridge, Kent C.
    COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE, 2014, 14 (02): 473 - 492
  • [27] Multifidelity Reinforcement Learning With Gaussian Processes: Model-Based and Model-Free Algorithms
    Suryan, Varun
    Gondhalekar, Nahush
    Tokekar, Pratap
    IEEE ROBOTICS & AUTOMATION MAGAZINE, 2020, 27 (02): 117 - 128
  • [28] Parallel model-based and model-free reinforcement learning for card sorting performance
    Steinke, Alexander
    Lange, Florian
    Kopp, Bruno
    SCIENTIFIC REPORTS, 2020, 10
  • [29] Acute stress effects on model-based versus model-free reinforcement learning
    Otto, Ross
    Raio, Candace
    Phelps, Elizabeth
    Daw, Nathaniel
    JOURNAL OF COGNITIVE NEUROSCIENCE, 2013: 178 - 179
  • [30] Connecting Model-Based and Model-Free Control With Emotion Modulation in Learning Systems
    Huang, Xiao
    Wu, Wei
    Qiao, Hong
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2021, 51 (08): 4624 - 4638