An optimized Q-Learning algorithm for mobile robot local path planning

Cited by: 47
Authors
Zhou, Qian [1 ]
Lian, Yang [2 ,3 ]
Wu, Jiayang [1 ]
Zhu, Mengyue [1 ]
Wang, Haiyong [2 ,3 ]
Cao, Jinli [4 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Modern Posts, Nanjing 210003, Jiangsu, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Comp Sci, Sch Software, Nanjing 210037, Jiangsu, Peoples R China
[3] Nanjing Univ Posts & Telecommun, Sch Cyberspace Secur, Nanjing 210037, Jiangsu, Peoples R China
[4] La Trobe Univ, Dept Comp Sci & Comp Engn, Melbourne, Australia
Funding
National Natural Science Foundation of China;
Keywords
Mobile robot; Q-Learning algorithm; Local path planning; Reinforcement learning; Adaptive learning rate;
DOI
10.1016/j.knosys.2024.111400
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The Q-Learning algorithm is a reinforcement learning technique widely used in fields such as path planning, intelligent transportation, and penetration testing. Through interaction between an agent and its environment, it enables the agent to learn an optimal policy that maximizes cumulative reward. Most non-agent-based path planning algorithms struggle to explore completely unknown environments effectively, lacking efficient perception in unfamiliar settings. In addition, many Q-Learning-based path planning algorithms suffer from slow convergence and a tendency to become trapped in locally optimal solutions. To address these issues, an optimized version of the Q-Learning algorithm (Optimized Q-Learning, O-QL) is proposed and applied to local path planning for mobile robots. O-QL introduces a novel Q-table initialization method, a new action-selection policy, and a new reward function, and adapts the Root Mean Square Propagation (RMSprop) method for learning rate adjustment. This adjustment dynamically tunes the learning rate based on gradient changes to accelerate learning and improve path planning efficiency. Simulation experiments are carried out in three maze environments of different complexity levels, and the algorithm's local path planning performance is evaluated in terms of steps, exploration reward, learning rate change, and running time. The experimental results show that O-QL improves on existing algorithms across all four metrics.
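The record does not include the paper's exact O-QL formulation, but the core idea it describes, an RMSprop-style adaptive learning rate driving the tabular Q-Learning update, can be sketched as follows. All names and parameter values here (`base_lr`, `decay`, the use of the TD error as the "gradient" signal) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rmsprop_q_update(Q, cache, s, a, r, s_next, *,
                     base_lr=0.1, gamma=0.9, decay=0.9, eps=1e-8):
    """One Q-Learning update whose step size is scaled per state-action pair
    by an RMSprop-style running average of squared TD errors.
    Illustrative sketch only; O-QL's actual components are not in this record."""
    # Standard temporal-difference error of tabular Q-Learning.
    td_error = r + gamma * np.max(Q[s_next]) - Q[s, a]
    # RMSprop accumulator: exponential moving average of the squared signal.
    cache[s, a] = decay * cache[s, a] + (1 - decay) * td_error ** 2
    # Learning rate shrinks where updates have been large, grows where small.
    lr = base_lr / (np.sqrt(cache[s, a]) + eps)
    Q[s, a] += lr * td_error
    return td_error

# Usage on a tiny 4-state, 2-action table.
Q = np.zeros((4, 2))
cache = np.zeros((4, 2))
rmsprop_q_update(Q, cache, s=0, a=1, r=1.0, s_next=2)
```

As in RMSprop for neural network training, the per-entry accumulator damps the step size for state-action pairs with persistently large TD errors, which is one plausible way to realize the "dynamic tuning based on gradient changes" the abstract refers to.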
Pages: 9
References
41 records in total
[1] Bai Z., 2022, 2022 6 CAA INT C VEH, P1
[2] Claussmann, Laurene; Revilloud, Marc; Gruyer, Dominique; Glaser, Sebastien. A Review of Motion Planning for Highway Autonomous Driving [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2020, 21(05): 1826-1848.
[3] Cong YZ, 2009, IEEE ASME INT C ADV, P851, DOI [10.1109/AIM.2009.5229903, 10.1109/ICEPT.2009.5270540]
[4] Dzeroski, S; De Raedt, L; Driessens, K. Relational reinforcement learning [J]. MACHINE LEARNING, 2001, 43(1-2): 7-52.
[5] Gao L., 2018, J. Jilin Univ. (Inf. Sci. Ed.), V36, P439
[6] Hu, Xuemin; Chen, Long; Tang, Bo; Cao, Dongpu; He, Haibo. Dynamic path planning for autonomous driving on various roads with avoidance of static and moving obstacles [J]. MECHANICAL SYSTEMS AND SIGNAL PROCESSING, 2018, 100: 482-500.
[7] Iiduka, Hideaki. Appropriate Learning Rates of Adaptive Learning Rate Optimization Algorithms for Training Deep Neural Networks [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52(12): 13250-13261.
[8] Konar, Amit; Chakraborty, Indrani Goswami; Singh, Sapam Jitu; Jain, Lakhmi C.; Nagar, Atulya K. A Deterministic Improved Q-Learning for Path Planning of a Mobile Robot [J]. IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2013, 43(05): 1141-1153.
[9] Li, Qianyu; Hu, Miao; Hao, Hao; Zhang, Min; Li, Yang. INNES: An intelligent network penetration testing model based on deep reinforcement learning [J]. APPLIED INTELLIGENCE, 2023, 53(22): 27110-27127.
[10] Li SD, 2015, 2015 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION, P409, DOI 10.1109/ICInfA.2015.7279322