Reinforcement imitation learning for reliable and efficient autonomous navigation in complex environments

Cited by: 0
Authors
Kumar D. [1 ]
Affiliation
[1] Computer Science and Engineering, United College of Engineering and Research, Uttar Pradesh, Naini, Prayagraj
Keywords
Autonomous navigation; Deep neural networks; Dynamic environments; Imitation learning; Q-learning; Reinforcement learning;
DOI
10.1007/s00521-024-09678-y
Abstract
Reinforcement learning (RL) and imitation learning (IL) are two useful machine learning techniques that have shown potential for enhancing navigation performance. Both methods seek a policy, that is, a decision function, learned either through reinforcement or through imitation. In this paper, we propose a novel algorithm named Reinforcement Imitation Learning (RIL) that naturally combines RL and IL to achieve more reliable and efficient navigation in dynamic environments. RIL is a hybrid approach that uses RL for policy optimization and IL to provide guidance from expert demonstrations. We compare the convergence of RIL with that of conventional RL and IL to demonstrate our algorithm's performance in a dynamic environment with moving obstacles. The test results indicate that RIL achieves better collision avoidance and navigation efficiency than traditional methods. The proposed RIL algorithm has broad application prospects in areas such as autonomous driving, unmanned aerial vehicles, and robotics. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
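The record does not give the paper's exact formulation, but the abstract's idea of blending Q-learning-based policy optimization with guidance from expert demonstrations can be sketched as follows. This is a minimal illustrative assumption, not the authors' implementation: a tabular Q-learning agent whose action selection is probabilistically biased toward stored expert actions. All class and parameter names (`RILAgent`, `beta`, etc.) are hypothetical.

```python
import random
from collections import defaultdict

class RILAgent:
    """Hypothetical sketch of a hybrid RL + imitation agent."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, beta=0.5):
        self.q = defaultdict(float)            # Q(s, a) value table
        self.actions = actions                 # discrete action set
        self.alpha, self.gamma = alpha, gamma  # learning rate, discount factor
        self.epsilon = epsilon                 # exploration rate
        self.beta = beta                       # weight of imitation guidance
        self.expert = {}                       # state -> demonstrated action

    def load_demonstrations(self, demos):
        """Store expert (state, action) pairs used for imitation guidance."""
        for state, action in demos:
            self.expert[state] = action

    def choose_action(self, state):
        # Imitation component: with probability beta, follow the expert
        # demonstration for this state if one exists.
        if state in self.expert and random.random() < self.beta:
            return self.expert[state]
        # Reinforcement component: epsilon-greedy over learned Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard Q-learning temporal-difference update."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

The design choice here is that imitation acts only on action selection while the value update stays pure Q-learning, so the agent can eventually outgrow imperfect demonstrations; other hybrids instead add an imitation term to the loss itself.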
Pages: 11945-11961
Page count: 16
Related papers
50 records
  • [31] Learning Reliable and Efficient Navigation with a Humanoid
    Osswald, Stefan
    Hornung, Armin
    Bennewitz, Maren
    2010 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2010, : 2375 - 2380
  • [32] Perceptual Interpretation for Autonomous Navigation through Dynamic Imitation Learning
    Silver, David
    Bagnell, J. Andrew
    Stentz, Anthony
    ROBOTICS RESEARCH, 2011, 70 : 433 - 449
  • [33] Deep-reinforcement-learning-based UAV autonomous navigation and collision avoidance in unknown environments
    Fei WANG
    Xiaoping ZHU
    Zhou ZHOU
    Yang TANG
    Chinese Journal of Aeronautics, 2024, 37 (03) : 237 - 257
  • [34] Learning Autonomous Navigation in Unmapped and Unknown Environments
    He, Naifeng
    Yang, Zhong
    Bu, Chunguang
    Fan, Xiaoliang
    Wu, Jiying
    Sui, Yaoyu
    Que, Wenqiang
    SENSORS, 2024, 24 (18)
  • [35] Goal-Guided Transformer-Enabled Reinforcement Learning for Efficient Autonomous Navigation
    Huang, Wenhui
    Zhou, Yanxin
    He, Xiangkun
    Lv, Chen
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (02) : 1832 - 1845
  • [36] Reinforcement Learning for Robot Navigation in Nondeterministic Environments
    Liu, Xiaoyun
    Zhou, Qingrui
    Ren, Hailin
    Sun, Changhao
    PROCEEDINGS OF 2018 5TH IEEE INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND INTELLIGENCE SYSTEMS (CCIS), 2018, : 615 - 619
  • [37] Autonomous navigation of UAV in multi-obstacle environments based on a Deep Reinforcement Learning approach
    Zhang, Sitong
    Li, Yibing
    Dong, Qianhui
    Applied Soft Computing, 2022, 115
  • [38] An Extended Navigation Framework for Autonomous Mobile Robot in Dynamic Environments using Reinforcement Learning Algorithm
    Nguyen Van Dinh
    Nguyen Hong Viet
    Lan Anh Nguyen
    Hong Toan Dinh
    Nguyen Tran Hiep
    Pham Trung Dung
    Trung-Dung Ngo
    Xuan-Tung Truong
    2017 INTERNATIONAL CONFERENCE ON SYSTEM SCIENCE AND ENGINEERING (ICSSE), 2017, : 336 - 339
  • [40] Autonomous Navigation for Exploration of Unknown Environments and Collision Avoidance in Mobile Robots Using Reinforcement Learning
    Cardona, G. A.
    Bravo, C.
    Quesada, W.
    Ruiz, D.
    Obeng, M.
    Wu, X.
    Calderon, J. M.
    2019 IEEE SOUTHEASTCON, 2019,