Reinforcement imitation learning for reliable and efficient autonomous navigation in complex environments

Cited by: 0
Authors
Kumar D. [1]
Affiliations
[1] Computer Science and Engineering, United College of Engineering and Research, Naini, Prayagraj, Uttar Pradesh
Keywords
Autonomous navigation; Deep neural networks; Dynamic environments; Imitation learning; Q-learning; Reinforcement learning
DOI
10.1007/s00521-024-09678-y
Abstract
Reinforcement learning (RL) and imitation learning (IL) are two machine learning techniques that have been shown to improve navigation performance. Both methods seek a policy decision function, either through reinforcement or through imitation. In this paper, we propose a novel algorithm, Reinforcement Imitation Learning (RIL), that naturally combines RL and IL to achieve more reliable and efficient navigation in dynamic environments. RIL is a hybrid approach that uses RL for policy optimization and IL to provide guidance from expert demonstrations. We compare the convergence of RIL with that of conventional RL and IL in a dynamic environment with moving obstacles to support the algorithm's performance claims. The test results indicate that RIL achieves better collision avoidance and navigation efficiency than the traditional methods. The proposed RIL algorithm has broad application prospects in areas such as autonomous driving, unmanned aerial vehicles, and robotics. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
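
As a rough illustration of the hybrid idea described in the abstract, and not a reproduction of the authors' method, the sketch below combines a DQN-style temporal-difference loss (the RL term) with a behavior-cloning loss on expert demonstrations (the IL term). The network layout, the weighting coefficient bc_weight, and the cross-entropy imitation loss are illustrative assumptions only.

# Minimal sketch, assuming a discrete-action Q-learning agent whose training
# objective adds a behavior-cloning term on expert (state, action) pairs.
# All architectural and weighting choices here are assumptions, not the
# paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Small MLP mapping a state vector to one Q-value per discrete action."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def ril_loss(q_net: QNetwork,
             target_net: QNetwork,
             batch: dict,          # replay transitions collected by the agent
             expert_batch: dict,   # (state, action) pairs from expert demos
             gamma: float = 0.99,
             bc_weight: float = 0.5) -> torch.Tensor:
    """Combined objective: TD error (RL term) + behavior cloning (IL term)."""
    # RL term: one-step Q-learning target on environment transitions.
    q_pred = q_net(batch["state"]).gather(1, batch["action"].unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(batch["next_state"]).max(dim=1).values
        target = batch["reward"] + gamma * (1.0 - batch["done"]) * next_q
    td_loss = F.smooth_l1_loss(q_pred, target)

    # IL term: push the greedy policy toward the expert's actions.
    logits = q_net(expert_batch["state"])
    bc_loss = F.cross_entropy(logits, expert_batch["action"])

    return td_loss + bc_weight * bc_loss

In this reading, the expert demonstrations act as guidance that accelerates early learning, while the TD term lets the policy improve beyond the demonstrations; bc_weight controls the balance between the two and would typically be tuned or annealed.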
Pages: 11945-11961
Page count: 16