Neural signatures of reinforcement learning correlate with strategy adoption during spatial navigation

Cited by: 19
Authors
Anggraini, Dian [1,4]
Glasauer, Stefan [2,3,4]
Wunderlich, Klaus [1,3,4]
Affiliations
[1] Ludwig Maximilians Univ Munchen, Dept Psychol, D-80802 Munich, Germany
[2] Ludwig Maximilians Univ Munchen, Klinikum Grosshadern, Dept Neurol, Ctr Sensorimotor Res, D-81377 Munich, Germany
[3] Bernstein Ctr Computat Neurosci Munich, D-82152 Martinsried, Germany
[4] Ludwig Maximilians Univ Munchen, Grad Sch Syst Neurosci, D-82152 Martinsried, Germany
Source
SCIENTIFIC REPORTS | 2018, Vol. 8
Keywords
DECISION-MAKING; BASAL GANGLIA; HUMAN HIPPOCAMPUS; COGNITIVE MAPS; GRID CELLS; MEMORY; SYSTEMS; CORTEX; PLACE; REPRESENTATION;
DOI
10.1038/s41598-018-28241-z
Chinese Library Classification (CLC)
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biosciences]; N [General natural sciences];
Discipline classification code
07; 0710; 09;
Abstract
Human navigation is generally believed to rely on two types of strategy: route-based and map-based navigation. Both types of navigation require making spatial decisions along the traversed route, although formal computational and neural links between navigational strategies and mechanisms of value-based decision making have so far been underexplored in humans. Here we employed functional magnetic resonance imaging (fMRI) while subjects located different objects in a virtual environment. We then modelled their paths using reinforcement learning (RL) algorithms, which successfully explained decision behavior and its neural correlates. Our results show that subjects used a mixture of route-based and map-based navigation, and their paths were well explained by model-free and model-based RL algorithms. Furthermore, the value signals of model-free choices during route-based navigation modulated the BOLD signals in the ventro-medial prefrontal cortex (vmPFC), whereas the BOLD signals in parahippocampal and hippocampal regions pertained to model-based value signals during map-based navigation. Our findings suggest that the brain might share computational mechanisms and neural substrates for navigation and value-based decisions, such that model-free choice guides route-based navigation and model-based choice directs map-based navigation. These findings open new avenues for computational modelling of wayfinding by directing attention to value-based decision making, in contrast to the common direction-and-distance approaches.
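The abstract describes fitting model-free and model-based reinforcement-learning models to subjects' navigation choices and relating the resulting value signals to BOLD responses. The Python/NumPy sketch below is a minimal, hypothetical illustration of that kind of hybrid controller, not the authors' fitted model: a TD-learned Q-table stands in for route-based (model-free) values, value iteration over a learned transition model stands in for map-based (model-based) values, and choices are sampled from a softmax over a weighted mixture of the two. The grid size, goal location, and all parameter values (ALPHA, GAMMA, BETA, W) are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4x4 grid world standing in for the virtual environment: states are cells,
# actions are the four compass moves, and one cell holds the target object.
# (Hypothetical setup for illustration; not the task used in the study.)
N_STATES, N_ACTIONS, GOAL = 16, 4, 15
ALPHA = 0.1    # learning rate for TD and reward-model updates (assumed)
GAMMA = 0.95   # discount factor (assumed)
BETA = 5.0     # softmax inverse temperature (assumed)
W = 0.5        # weight on the model-based ("map-based") values (assumed)

def step(state, action):
    """Deterministic grid transitions; reward 1 only when the goal cell is reached."""
    row, col = divmod(state, 4)
    if action == 0:   row = max(row - 1, 0)   # north
    elif action == 1: row = min(row + 1, 3)   # south
    elif action == 2: col = max(col - 1, 0)   # west
    else:             col = min(col + 1, 3)   # east
    nxt = row * 4 + col
    return nxt, float(nxt == GOAL)

# Model-free ("route-based") values, learned by temporal-difference updates.
Q_mf = np.zeros((N_STATES, N_ACTIONS))
# Ingredients of the model-based ("map-based") system: transition counts and
# a running estimate of immediate reward.
T_counts = np.ones((N_STATES, N_ACTIONS, N_STATES))
R_hat = np.zeros((N_STATES, N_ACTIONS))

def q_model_based(n_sweeps=50):
    """Value iteration on the learned model, a stand-in for map-based planning."""
    T = T_counts / T_counts.sum(axis=2, keepdims=True)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(n_sweeps):
        Q = R_hat + GAMMA * (T @ Q.max(axis=1))
    return Q

def softmax_choice(q_values):
    """Sample an action from a softmax over the mixed action values."""
    p = np.exp(BETA * (q_values - q_values.max()))
    p /= p.sum()
    return rng.choice(N_ACTIONS, p=p)

for episode in range(200):
    state = 0
    for _ in range(50):
        Q_mb = q_model_based()
        # Hybrid controller: choice is driven by a weighted mixture of
        # model-free and model-based action values.
        q_mix = (1 - W) * Q_mf[state] + W * Q_mb[state]
        action = softmax_choice(q_mix)
        nxt, reward = step(state, action)
        # Model-free TD update (the "route-based" learning signal).
        td_error = reward + GAMMA * Q_mf[nxt].max() - Q_mf[state, action]
        Q_mf[state, action] += ALPHA * td_error
        # Model learning for the map-based system.
        T_counts[state, action, nxt] += 1
        R_hat[state, action] += ALPHA * (reward - R_hat[state, action])
        state = nxt
        if reward == 1.0:
            break
```

In a model-fitting setting like the study above, the mixture weight W and temperature BETA would typically be estimated per subject from the observed choices rather than fixed in advance.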
Pages: 14
Related papers (50 in total)
  • [31] Offloading Strategy Based on Graph Neural Reinforcement Learning in Mobile Edge Computing
    Wang, Tao
    Xue, Ouyang
    Sun, Dingmi
    Chen, Yimin
    Li, Hao
    ELECTRONICS, 2024, 13 (12)
  • [32] Comparing Knowledge-Based Reinforcement Learning to Neural Networks in a Strategy Game
    Nechepurenko, Liudmyla
    Voss, Viktor
    Gritsenko, Vyacheslav
    HYBRID ARTIFICIAL INTELLIGENT SYSTEMS, HAIS 2020, 2020, 12344 : 312 - 328
  • [33] Supervised and Reinforcement Learning in Neural Network Based Approach to the Battleship Game Strategy
    Clementis, Ladislav
    ADVANCES IN INTELLIGENT SYSTEMS AND COMPUTING, 2013, 210 : 191 - 200
  • [34] A multiple neural network and reinforcement learning-based strategy for process control
    Dutta, Debaprasad
    Upreti, Simant R.
    JOURNAL OF PROCESS CONTROL, 2023, 121 : 103 - 118
  • [35] The influence of intentional and incidental learning on acquiring spatial knowledge during navigation
    van Asselen, Marieke
    Fritschy, Eva
    Postma, Albert
    PSYCHOLOGICAL RESEARCH-PSYCHOLOGISCHE FORSCHUNG, 2006, 70 (02): 151 - 156
  • [37] MGRL: Graph neural network based inference in a Markov network with reinforcement learning for visual navigation
    Lu, Yi
    Chen, Yaran
    Zhao, Dongbin
    Li, Dong
    NEUROCOMPUTING, 2021, 421 : 140 - 150
  • [39] Simulation of Mobile Robot Navigation Utilizing Reinforcement and Unsupervised Weightless Neural Network Learning Algorithm
    Yusof, Yusman
    Mansor, H. M. Asri H.
    Baba, H. M. Dani
    2015 IEEE STUDENT CONFERENCE ON RESEARCH AND DEVELOPMENT (SCORED), 2015, : 123 - 128
  • [40] Modular neural network and classical reinforcement learning for autonomous robot navigation: Inhibiting undesirable behaviors
    Antonelo, Eric A.
    Baerveldt, Albert-Jan
    Rognvaldsson, Thorsteinn
    Figueiredo, Mauricio
    2006 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORK PROCEEDINGS, VOLS 1-10, 2006, : 498 - +