Skill Fusion in Hybrid Robotic Framework for Visual Object Goal Navigation

Cited by: 3
Authors
Staroverov, Aleksei [1 ,2 ,3 ]
Muravyev, Kirill [2 ]
Yakovlev, Konstantin [2 ]
Panov, Aleksandr I. [1 ,2 ]
Affiliations
[1] AIRI, Moscow 105064, Russia
[2] Russian Acad Sci, Fed Res Ctr Comp Sci & Control, Moscow 119333, Russia
[3] Moscow Inst Phys & Technol, Dolgoprudnyi 141707, Russia
Keywords
navigation; robotics; reinforcement learning; frontier-based exploration; path-following control; simultaneous localization; implementation
DOI
10.3390/robotics12040104
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
In recent years, Embodied AI has become one of the main topics in robotics. To operate in human-centric environments, an agent needs the ability to explore previously unseen areas and to navigate to objects that humans want it to interact with. This task, which can be formulated as ObjectGoal Navigation (ObjectNav), is the main focus of this work. To solve this challenging problem, we propose a hybrid framework, SkillFusion, consisting of both non-learnable and learnable modules and a switcher between them. The former are more accurate, while the latter are more robust to sensor noise. To mitigate the sim-to-real gap, which often arises with learnable methods, we train them in a way that makes them less environment-dependent. As a result, our method achieved top results both in the Habitat simulator and in evaluations on a real robot.
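The sketch below is a minimal illustration of the hybrid design described in the abstract, assuming a switcher that alternates between a classical frontier-based exploration skill and a learned goal-reaching policy. All class and attribute names (FrontierExplorationSkill, LearnedGoalReachSkill, SkillSwitcher, Observation) and the switching criterion are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of a per-step skill switcher for ObjectNav, loosely
# inspired by the hybrid (classical + learnable) design in the abstract.
# None of these names or rules come from the paper itself.

import random
from dataclasses import dataclass


@dataclass
class Observation:
    goal_visible: bool      # did the semantic detector see the goal class?
    localization_ok: bool   # is the mapping / localization still consistent?


class FrontierExplorationSkill:
    """Classical, non-learnable skill: head toward the nearest map frontier."""
    def act(self, obs: Observation) -> str:
        return "move_to_nearest_frontier"


class LearnedGoalReachSkill:
    """Learnable skill: an RL policy step toward the detected goal object."""
    def act(self, obs: Observation) -> str:
        return "rl_policy_step_toward_goal"


class SkillSwitcher:
    """Chooses a skill each step: the classical pipeline while the map is
    reliable and the goal is unseen, the learned policy otherwise."""
    def __init__(self) -> None:
        self.explore = FrontierExplorationSkill()
        self.reach = LearnedGoalReachSkill()

    def act(self, obs: Observation) -> str:
        if obs.goal_visible or not obs.localization_ok:
            return self.reach.act(obs)    # learned skill: more noise-robust
        return self.explore.act(obs)      # classical skill: more accurate


if __name__ == "__main__":
    switcher = SkillSwitcher()
    for step in range(5):
        obs = Observation(goal_visible=random.random() < 0.3,
                          localization_ok=random.random() < 0.9)
        print(step, switcher.act(obs))
```

The design choice illustrated here follows the trade-off the abstract motivates: the accurate classical pipeline handles exploration while mapping remains reliable, and the noise-robust learned policy takes over once the goal is detected or localization degrades.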
Pages: 14
Related papers (50 in total)
  • [1] Interactive Semantic Map Representation for Skill-Based Visual Object Navigation
    Zemskova, Tatiana
    Staroverov, Aleksei
    Muravyev, Kirill
    Yudin, Dmitry A.
    Panov, Aleksandr I.
    IEEE ACCESS, 2024, 12 : 44628 - 44639
  • [2] Semantic Policy Network for Zero-Shot Object Goal Visual Navigation
    Zhao, Qianfan
    Zhang, Lu
    He, Bin
    Liu, Zhiyong
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (11) : 7655 - 7662
  • [3] NavTr: Object-Goal Navigation With Learnable Transformer Queries
    Mao, Qiuyu
    Wang, Jikai
    Xu, Meng
    Chen, Zonghai
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (12) : 11738 - 11745
  • [4] Towards Clear Evaluation of Robotic Visual Semantic Navigation
    Gutierrez-Alvarez, Carlos
    Hernandez-Garcia, Sergio
    Nasri, Nadia
    Cuesta-Infante, Alfredo
    Lopez-Sastre, Roberto J.
    2023 9TH INTERNATIONAL CONFERENCE ON AUTOMATION, ROBOTICS AND APPLICATIONS, ICARA, 2023 : 340 - 345
  • [5] Monocular Visual Navigation Algorithm for Nursing Robots via Deep Learning Oriented to Dynamic Object Goal
    Fu, Guoqiang
    Wang, Yina
    Yang, Junyou
    Wang, Shuoyu
    Yang, Guang
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2024, 110 (01)
  • [6] Skill-Based Hierarchical Reinforcement Learning for Target Visual Navigation
    Wang, Shuo
    Wu, Zhihao
    Hu, Xiaobo
    Lin, Youfang
    Lv, Kai
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8920 - 8932
  • [7] A hierarchical model of goal directed navigation selects trajectories in a visual environment
    Erdem, Ugur M.
    Milford, Michael J.
    Hasselmo, Michael E.
    NEUROBIOLOGY OF LEARNING AND MEMORY, 2015, 117 : 109 - 121
  • [8] Indoor Navigation Framework for Mapping and Localization of Multiple Robotic Wheelchairs
    Lokuge, Yasith
    Madumal, Prashan
    Kumara, Tharindu
    Ranasinghe, Naveen
    PROCEEDINGS FIFTH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS, MODELLING AND SIMULATION, 2014 : 197 - 200
  • [9] Agent-Centric Relation Graph for Object Visual Navigation
    Hu, Xiaobo
    Lin, Youfang
    Wang, Shuo
    Wu, Zhihao
    Lv, Kai
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (02) : 1295 - 1309