Model-based aversive learning in humans is supported by preferential task state reactivation

Cited by: 14
Authors
Wise, Toby [1 ,2 ,3 ]
Liu, Yunzhe [4 ,5 ]
Chowdhury, Fatima [1 ,2 ,6 ]
Dolan, Raymond J. [1 ,2 ,4 ]
Affiliations
[1] UCL, Max Planck UCL Ctr Computat Psychiat & Ageing Res, London, England
[2] UCL, Wellcome Ctr Human Neuroimaging, London, England
[3] CALTECH, Div Humanities & Social Sci, Pasadena, CA 91125 USA
[4] Beijing Normal Univ, IDG McGovern Inst Brain Res, State Key Lab Cognit Neurosci & Learning, Beijing, Peoples R China
[5] Chinese Inst Brain Res, Beijing, Peoples R China
[6] UCL Queen Sq Inst Neurol, Queen Sq MS Ctr, Dept Neuroinflammat, London, England
Funding
Wellcome Trust (UK);
Keywords
HIPPOCAMPAL PLACE CELLS; REVERSE REPLAY; MEMORY; REPRESENTATIONS; OSCILLATIONS; MECHANISMS; SEQUENCES; FUTURE; CORTEX; EXPERIENCE;
DOI
10.1126/sciadv.abf9616
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Harm avoidance is critical for survival, yet little is known regarding the neural mechanisms supporting avoidance in the absence of trial-and-error experience. Flexible avoidance may be supported by a mental model (i.e., model-based), a process for which neural reactivation and sequential replay have emerged as candidate mechanisms. Using magnetoencephalography during an aversive learning task, we show prospective and retrospective reactivation during planning and learning, respectively, coupled to evidence for sequential replay. Specifically, when individuals plan in an aversive context, we find preferential reactivation of subsequently chosen goal states. Stronger reactivation is associated with greater hippocampal theta power. At outcome receipt, unchosen goal states are reactivated regardless of outcome valence. Replay of paths leading to goal states is modulated by outcome valence, with aversive outcomes eliciting stronger reverse replay than safe outcomes. Our findings suggest that avoidance involves simulation of unexperienced states through hippocampally mediated reactivation and replay.
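For readers unfamiliar with how "sequential replay" is quantified from MEG decoding output, the sketch below illustrates a sequenceness-style lagged-regression analysis of the kind commonly used in this literature (temporally delayed linear modeling). It is a minimal sketch under stated assumptions: the function name, the simplified projection step, and all parameters are illustrative, not the authors' published pipeline.

```python
import numpy as np

def sequenceness(state_probs, transitions, max_lag=60):
    """Forward-minus-backward replay evidence via lagged regression.

    TDLM-style sketch (illustrative only).
    state_probs : (T, K) decoded per-state reactivation probabilities
                  at each MEG sample.
    transitions : (K, K) binary task transition matrix,
                  transitions[i, j] = 1 if state i leads to state j.
    Returns an array of length max_lag; positive values indicate
    forward replay at that lag, negative values reverse replay.
    """
    diff = np.zeros(max_lag)
    for lag in range(1, max_lag + 1):
        X = state_probs[:-lag]   # state evidence at time t
        Y = state_probs[lag:]    # state evidence at time t + lag
        # First level: empirical lag-specific state-to-state couplings.
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (K, K)
        # Second level (simplified): project the empirical couplings
        # onto the forward and backward task structures.
        fwd = np.sum(beta * transitions)
        bwd = np.sum(beta * transitions.T)
        diff[lag - 1] = fwd - bwd
    return diff

# Toy usage: 4 task states with random "decoded" probabilities,
# so the output should hover around zero (no real replay).
rng = np.random.default_rng(0)
probs = rng.random((1000, 4))
task = np.array([[0, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]])
print(sequenceness(probs, task, max_lag=10))
```

In full analyses the second level is typically a regression of the empirical coupling matrix on forward, backward, and nuisance (e.g., autocorrelation) regressors rather than the simple projection shown here.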
Pages: 14