Model-based aversive learning in humans is supported by preferential task state reactivation

Cited by: 14
Authors
Wise, Toby [1,2,3]
Liu, Yunzhe [4,5]
Chowdhury, Fatima [1,2,6]
Dolan, Raymond J. [1,2,4]
Institutions
[1] UCL, Max Planck UCL Ctr Computat Psychiat & Ageing Res, London, England
[2] UCL, Wellcome Ctr Human Neuroimaging, London, England
[3] CALTECH, Div Humanities & Social Sci, Pasadena, CA 91125 USA
[4] Beijing Normal Univ, IDG McGovern Inst Brain Res, State Key Lab Cognit Neurosci & Learning, Beijing, Peoples R China
[5] Chinese Inst Brain Res, Beijing, Peoples R China
[6] UCL Queen Sq Inst Neurol, Queen Sq MS Ctr, Dept Neuroinflammat, London, England
Funding
Wellcome Trust (UK);
Keywords
HIPPOCAMPAL PLACE CELLS; REVERSE REPLAY; MEMORY; REPRESENTATIONS; OSCILLATIONS; MECHANISMS; SEQUENCES; FUTURE; CORTEX; EXPERIENCE;
DOI
10.1126/sciadv.abf9616
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Discipline codes
07; 0710; 09;
Abstract
Harm avoidance is critical for survival, yet little is known about the neural mechanisms that support avoidance in the absence of trial-and-error experience. Flexible avoidance may be supported by a mental model of the environment (i.e., model-based control), a process for which neural reactivation and sequential replay have emerged as candidate mechanisms. Using magnetoencephalography during an aversive learning task, we show prospective and retrospective reactivation during planning and learning, respectively, coupled with evidence for sequential replay. Specifically, when individuals plan in an aversive context, we find preferential reactivation of subsequently chosen goal states, and stronger reactivation is associated with greater hippocampal theta power. At outcome receipt, unchosen goal states are reactivated regardless of outcome valence. Replay of paths leading to goal states is modulated by outcome valence, with aversive outcomes associated with stronger reverse replay than safe outcomes. Our findings suggest that avoidance involves simulation of unexperienced states through hippocampally mediated reactivation and replay.
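The abstract itself does not spell out how replay direction is quantified, but measures of this kind are typically computed as "sequenceness": a lagged regression asking whether decoded reactivation of one task state predicts reactivation of the next state in the hypothesized path a fixed delay later, contrasted forward versus backward. Below is a minimal illustrative sketch of that idea on simulated data; the variable names, the simulated reactivation time courses, and the specific regression form are all assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated decoded state-reactivation time courses: T samples x 4 task
# states (illustrative only -- in a real analysis these would come from
# MEG classifiers trained on each state's stimulus).
T, n_states, lag = 1000, 4, 4
X = rng.random((T, n_states)) * 0.1
# Inject a forward sequence 0 -> 1 -> 2 -> 3 at the chosen lag.
for t in range(0, T - 3 * lag, 40):
    for s in range(n_states):
        X[t + s * lag, s] += 1.0

# Forward transition matrix of the hypothesized path.
TF = np.zeros((n_states, n_states))
for s in range(n_states - 1):
    TF[s, s + 1] = 1.0

def empirical_transitions(X, lag):
    """First level: regress X(t + lag) on X(t), giving a matrix B where
    B[i, j] estimates how strongly state i predicts state j lag samples later."""
    past, future = X[:-lag], X[lag:]
    B, *_ = np.linalg.lstsq(past, future, rcond=None)
    return B

def sequenceness(X, TF, lag):
    """Second level: evidence for forward minus reverse traversal of the
    transition structure TF at this lag. Positive = net forward replay,
    negative = net reverse replay."""
    B = empirical_transitions(X, lag)
    return np.sum(B * TF) - np.sum(B * TF.T)

print(sequenceness(X, TF, lag))  # positive here, since we injected a forward sequence
```

In this framing, the paper's "stronger reverse replay after aversive outcomes" would correspond to more negative sequenceness values for outcome-period data, and a real analysis would additionally control for autocorrelation and test across a range of lags.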
Pages: 14
Related articles (showing 10 of 50)
  • [1] Case-Based Task Generalization in Model-Based Reinforcement Learning
    Zholus, Artem
    Panov, Aleksandr I.
    ARTIFICIAL GENERAL INTELLIGENCE, AGI 2021, 2022, 13154 : 344 - 354
  • [2] Humans primarily use model-based inference in the two-stage task
    Feher da Silva, Carolina
    Hare, Todd A.
    NATURE HUMAN BEHAVIOUR, 2020, 4 (10) : 1053 - 1066
  • [3] Aversive Model-based Learning: Presynaptic Dopamine and mu-Opioid Receptor Availability
    Voon, Valerie
    Joutsa, Juho
    Kaasinen, Valterri
    BIOLOGICAL PSYCHIATRY, 2017, 81 (10) : S155 - S155
  • [4] Model-based analysis of learning latent structures in probabilistic reversal learning task
    Masumi, Akira
    Sato, Takashi
    ARTIFICIAL LIFE AND ROBOTICS, 2021, 26 (03) : 275 - 282
  • [5] Task complexity interacts with state-space uncertainty in the arbitration between model-based and model-free learning
    Kim, Dongjae
    Park, Geon Yeong
    O'Doherty, John P.
    Lee, Sang Wan
    NATURE COMMUNICATIONS, 2019, 10 (1)
  • [6] Latent-state and model-based learning in PTSD
    Cisler, Josh M.
    Dunsmoor, Joseph E.
    Fonzo, Gregory A.
    Nemeroff, Charles B.
    TRENDS IN NEUROSCIENCES, 2024, 47 (02) : 150 - 162
  • [7] Model-Based Reinforcement Learning with Multi-task Offline Pretraining
    Pan, Minting
    Zheng, Yitao
    Wang, Yunbo
    Yang, Xiaokang
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, PT VII, ECML PKDD 2024, 2024, 14947 : 22 - 39