Reinforcement learning as a robotics-inspired framework for insect navigation: from spatial representations to neural implementation

Cited by: 1
Authors
Lochner, Stephan [1 ]
Honerkamp, Daniel [2 ]
Valada, Abhinav [2 ]
Straw, Andrew D. [1 ,3 ]
Affiliations
[1] Univ Freiburg, Inst Biol 1, Freiburg, Germany
[2] Univ Freiburg, Dept Comp Sci, Freiburg, Germany
[3] Univ Freiburg, Bernstein Ctr Freiburg, Freiburg, Germany
Keywords
insect navigation; reinforcement learning; robot navigation; mushroom bodies; spatial representation; cognitive map; world model
Keywords Plus
MUSHROOM BODIES; MEMORY; MAP; ENVIRONMENTS; OPTIMIZATION; CONNECTIONS; INTEGRATION; MECHANISMS; DIFFERENCE; BRAIN
DOI
10.3389/fncom.2024.1460006
CLC Classification
Q [Biological Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Bees are among the master navigators of the insect world. Despite impressive advances in robot navigation research, the performance of these insects remains unrivaled by any artificial system in terms of training efficiency and generalization capability, particularly given their limited computational capacity. At the same time, the computational principles underlying these extraordinary feats are still only partially understood. The theoretical framework of reinforcement learning (RL) provides an ideal focal point for bringing the two fields together for mutual benefit. In particular, we analyze and compare representations of space in robot and insect navigation models through the lens of RL, as the efficiency of insect navigation is likely rooted in an efficient and robust internal representation linking retinotopic (egocentric) visual input with the geometry of the environment. While RL has long been at the core of robot navigation research, current computational theories of insect navigation are not commonly formulated within this framework, but largely as an associative learning process implemented in the insect brain, especially in the mushroom body (MB). Here we propose specific hypothetical components of the MB circuit that would enable the implementation of a certain class of relatively simple RL algorithms, capable of integrating distinct components of a navigation task, reminiscent of hierarchical RL models used in robot navigation. We discuss how current models of insect and robot navigation explore representations beyond classical, complete map-like representations, with spatial information embedded in the respective latent representations to varying degrees.
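The "relatively simple RL algorithms" the abstract refers to can be illustrated with tabular temporal-difference learning, the classic bridge between associative learning and value-based RL. The sketch below is not taken from the paper; the `q_learning` function, the corridor task, and all parameter values are hypothetical illustrations of the algorithm class, not of the proposed MB circuit.

```python
import random

def q_learning(n_states, n_actions, step, episodes=200, alpha=0.1,
               gamma=0.9, eps=0.1, max_steps=100, seed=0):
    """Tabular Q-learning against an environment step(s, a) -> (s2, r, done)."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            # epsilon-greedy action selection with random tie-breaking
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                best = max(q[s])
                a = rng.choice([i for i, v in enumerate(q[s]) if v == best])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])  # temporal-difference update
            if done:
                break
            s = s2
    return q

# Hypothetical toy task: a 5-state corridor. Action 1 steps right, action 0
# steps left (reflecting at state 0); reward 1.0 on reaching state 4.
def corridor(s, a):
    s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

q = q_learning(5, 2, corridor)
```

After training, the learned values prefer the rightward action in every state, i.e. the agent has acquired a goal-directed policy from sparse reward alone; the same TD error term is what associative-learning accounts of MB plasticity compute via dopaminergic feedback.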
Pages: 23
Related Papers (2)
  • [1] Anggraini, Dian; Glasauer, Stefan; Wunderlich, Klaus. Neural signatures of reinforcement learning correlate with strategy adoption during spatial navigation. Scientific Reports, 2018, 8.
  • [2] Li, Jinhui; Zhang, Ruibin; Liu, Siqi; Liang, Qunjun; Zheng, Senning; He, Xianyou; Huang, Ruiwang. Human spatial navigation: Neural representations of spatial scales and reference frames obtained from an ALE meta-analysis. NeuroImage, 2021, 238.