Reinforcement learning as a robotics-inspired framework for insect navigation: from spatial representations to neural implementation

Cited: 1
Authors
Lochner, Stephan [1 ]
Honerkamp, Daniel [2 ]
Valada, Abhinav [2 ]
Straw, Andrew D. [1 ,3 ]
Affiliations
[1] Univ Freiburg, Inst Biol 1, Freiburg, Germany
[2] Univ Freiburg, Dept Comp Sci, Freiburg, Germany
[3] Univ Freiburg, Bernstein Ctr Freiburg, Freiburg, Germany
Keywords
insect navigation; reinforcement learning; robot navigation; mushroom bodies; spatial representation; cognitive map; world model; MUSHROOM BODIES; MEMORY; MAP; ENVIRONMENTS; OPTIMIZATION; CONNECTIONS; INTEGRATION; MECHANISMS; DIFFERENCE; BRAIN;
DOI
10.3389/fncom.2024.1460006
CLC number
Q [Biological Sciences];
Subject classification codes
07; 0710; 09
Abstract
Bees are among the master navigators of the insect world. Despite impressive advances in robot navigation research, the performance of these insects is still unrivaled by any artificial system in terms of training efficiency and generalization capabilities, particularly considering their limited computational capacity. At the same time, the computational principles underlying these extraordinary feats remain only partially understood. The theoretical framework of reinforcement learning (RL) provides an ideal focal point to bring the two fields together for mutual benefit. In particular, we analyze and compare representations of space in robot and insect navigation models through the lens of RL, as the efficiency of insect navigation is likely rooted in an efficient and robust internal representation that links retinotopic (egocentric) visual input with the geometry of the environment. While RL has long been at the core of robot navigation research, current computational theories of insect navigation are not commonly formulated within this framework, but largely as an associative learning process implemented in the insect brain, especially in the mushroom body (MB). Here we propose specific hypothetical components of the MB circuit that would enable the implementation of a certain class of relatively simple RL algorithms, capable of integrating distinct components of a navigation task, reminiscent of hierarchical RL models used in robot navigation. We discuss how current models of insect and robot navigation explore representations beyond classical, complete map-like representations, with spatial information being embedded in the respective latent representations to varying degrees.
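To make concrete what a "relatively simple RL algorithm" of the kind discussed in the abstract looks like, the sketch below implements tabular Q-learning, a standard temporal-difference method, on a toy one-dimensional corridor task. The corridor environment, state/action encoding, and all parameter values are illustrative assumptions for exposition only; they are not the MB circuit model proposed by the authors.

```python
import random

# Toy 1-D corridor: states 0..4, reward 1 at the goal state 4.
# Illustrative stand-in for a navigation task; NOT the authors' MB model.
N_STATES, GOAL = 5, 4
ACTIONS = (1, -1)  # step right / step left

def step(state, action):
    """Environment transition: move, clip to corridor, reward at goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # Temporal-difference update: the bracketed term is the
            # reward-prediction error driving the value adjustment.
            best_next = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learning()
# Greedy policy: at every non-terminal state, head right toward the goal.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES)]
```

The appeal of this algorithm class in the insect context is its modest requirements: a scalar prediction-error signal and a set of adjustable state-action values, without any explicit map of the corridor.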
Pages: 23