An Enhanced Deep Q Network Algorithm for Localized Obstacle Avoidance in Indoor Robot Path Planning

Cited by: 1
Authors
Chen, Cheng [1 ]
Yu, Jiantao [1 ]
Qian, Songrong [2 ]
Affiliations
[1] Guizhou Univ, Sch Mech Engn, Guiyang 550025, Peoples R China
[2] Guizhou Univ, State Key Lab Publ Big Data, Guiyang 550025, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, No. 23
Keywords
deep Q network; local path planning; PER-D2MQN; Gazebo simulation; mobile robot;
DOI
10.3390/app142311195
CLC Number
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Path planning is a key task for mobile robots, and applying the Deep Q Network (DQN) algorithm to mobile robot path planning has become a focus and a challenge of current research. To overcome the obstacle avoidance limitations that the DQN algorithm faces in indoor robot path planning, this paper proposes a solution based on an improved DQN algorithm. To address the low learning efficiency of DQN, the Dueling DQN structure is introduced to enhance performance, and it is combined with a Prioritized Experience Replay (PER) mechanism to keep the robot stable during learning. In addition, the idea of the Munchausen Deep Q Network (M-DQN) is incorporated to guide the robot toward the optimal policy more effectively. Based on these improvements, this paper proposes the PER-D2MQN algorithm. To validate its effectiveness, we conducted multidimensional comparison experiments against DQN, Dueling DQN, and the existing PMR-DQN method in the Gazebo simulation environment, examining the cumulative and average rewards for reaching the goal point, the number of execution steps at convergence, and the time the robot takes to reach the goal point. The simulation results show that the PER-D2MQN algorithm obtains the highest reward in both static and complex environments, exhibits the best convergence, and finds the goal point with the fewest average steps and the shortest elapsed time.
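The combination the abstract describes can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch fragment (not the authors' implementation; the network sizes and the hyperparameters gamma, tau, alpha, and l0 are assumed values) showing a dueling Q-network head and the Munchausen-augmented TD target that the M-DQN component refers to:

# Minimal sketch of two ingredients named in the abstract (illustrative, not
# the authors' code): a dueling Q-head and a Munchausen-augmented TD target.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    def __init__(self, state_dim, n_actions, hidden=128):  # sizes are assumptions
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, s):
        h = self.body(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

def munchausen_target(target_net, batch, gamma=0.99, tau=0.03, alpha=0.9, l0=-1.0):
    """Munchausen target: reward + clipped scaled log-policy bonus
    + soft (entropy-regularized) bootstrap value of the next state."""
    s, a, r, s2, done = batch  # a: LongTensor (B, 1); r, done: FloatTensor (B, 1)
    with torch.no_grad():
        # Policy implied by the target network: pi = softmax(Q / tau)
        log_pi = F.log_softmax(target_net(s) / tau, dim=1)
        # Munchausen bonus alpha * tau * log pi(a|s), clipped to [l0, 0]
        bonus = alpha * (tau * log_pi.gather(1, a)).clamp(min=l0, max=0.0)
        q2 = target_net(s2)
        log_pi2 = F.log_softmax(q2 / tau, dim=1)
        # Soft next-state value: E_pi[Q(s', a') - tau * log pi(a'|s')]
        soft_v2 = (log_pi2.exp() * (q2 - tau * log_pi2)).sum(dim=1, keepdim=True)
        return r + bonus + gamma * (1.0 - done) * soft_v2

In a PER-D2MQN-style training loop, the absolute difference between the online network's Q(s, a) and this target (the TD error) would also set each transition's sampling priority in the prioritized replay buffer, so informative transitions are replayed more often.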
Pages: 20
Related Papers
38 records in total
[1] Almazrouei, Khawla; Kamel, Ibrahim; Rabie, Tamer. Dynamic Obstacle Avoidance and Path Planning through Reinforcement Learning. APPLIED SCIENCES-BASEL, 2023, 13(14).
[2] Dobrevski, Matej; Skocaj, Danijel. Deep reinforcement learning for map-less goal-driven robot navigation. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2021, 18(1).
[3] Gao, Junli; Ye, Weijie; Guo, Jing; Li, Zhongjuan. Deep Reinforcement Learning for Indoor Mobile Robot Path Planning. SENSORS, 2020, 20(19): 1-15.
[4] Gok, Mehmet. Dynamic path planning via Dueling Double Deep Q-Network (D3QN) with prioritized experience replay. APPLIED SOFT COMPUTING, 2024, 158.
[5] Gu, Yuwan; Zhu, Zhitao; Lv, Jidong; Shi, Lin; Hou, Zhenjie; Xu, Shoukun. DM-DQN: Dueling Munchausen deep Q network for robot path planning. COMPLEX & INTELLIGENT SYSTEMS, 2023, 9(4): 4287-4300.
[6] Haarnoja T. PROCEEDINGS OF MACHINE LEARNING RESEARCH, 2018, 80.
[7] Han, Huiyan; Wang, Jiaqi; Kuang, Liqun; Han, Xie; Xue, Hongxin. Improved Robot Path Planning Method Based on Deep Reinforcement Learning. SENSORS, 2023, 23(12).
[8] Han, Qidong; Feng, Shuo; Wu, Xing; Qi, Jun; Yu, Shaowei. Retrospective-Based Deep Q-Learning Method for Autonomous Pathfinding in Three-Dimensional Curved Surface Terrain. APPLIED SCIENCES-BASEL, 2023, 13(10).
[9] Kamalova, Albina; Lee, Suk Gyu; Kwon, Soon Hak. Occupancy Reward-Driven Exploration with Deep Reinforcement Learning for Mobile Robot System. APPLIED SCIENCES-BASEL, 2022, 12(18).
[10] Kim, HoWon; Lee, WonChang. Dynamic Obstacle Avoidance of Mobile Robots Using Real-Time Q-learning. 2022 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION, AND COMMUNICATION (ICEIC), 2022.