Deep reinforcement learning-based local path planning in dynamic environments for mobile robot

Cited: 3
Authors
Tao, Bodong [1 ]
Kim, Jae-Hoon [1 ]
Affiliations
[1] Natl Korea Maritime & Ocean Univ, Dept Comp Engn & Interdisciplinary Major Maritime, 727 Taejong Ro, Busan 49112, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Path planning; Deep reinforcement learning; Adaptive soft actor-critic; Mobile robots; Dynamic window approach; Tile coding; HIERARCHICAL CONTROL; ALGORITHM;
DOI
10.1016/j.jksuci.2024.102254
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Path planning for robots in dynamic environments is a challenging task, as it requires balancing obstacle avoidance, trajectory smoothness, and path length during real-time planning. This paper proposes an algorithm called Adaptive Soft Actor-Critic (ASAC), which combines the Soft Actor-Critic (SAC) algorithm, tile coding, and the Dynamic Window Approach (DWA) to enhance path planning capabilities. ASAC leverages SAC with an automatic entropy adjustment mechanism to balance exploration and exploitation, and integrates tile coding for improved feature representation. In this framework, the action space is defined by DWA's three weighting parameters: target heading deviation, distance to the nearest obstacle, and velocity. To facilitate learning, a non-sparse reward function is designed that incorporates factors such as Time-to-Collision (TTC), heading, and velocity. To validate the effectiveness of the algorithm, experiments were conducted in four different environments, and the algorithm was evaluated on metrics such as trajectory deviation, smoothness, and time to reach the endpoint. The results demonstrate that ASAC outperforms existing algorithms in trajectory smoothness, arrival time, and overall adaptability across various scenarios, effectively enabling path planning in dynamic environments.
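To make the framework concrete, the following minimal Python sketch illustrates the two mechanisms the abstract describes: a DWA trajectory objective whose three weights are the action output by the SAC policy, and a non-sparse reward built from Time-to-Collision, heading, and velocity. All names, the trajectory layout (rows of x, y, theta, v), and the reward coefficients are assumptions for illustration, not taken from the paper.

import numpy as np

def dwa_objective(traj, goal, obstacles, weights):
    # Score one candidate DWA trajectory. `weights` is the 3-vector
    # (w_heading, w_dist, w_vel) produced by the SAC policy; `traj` is
    # an (N, 4) array of [x, y, theta, v] states (assumed layout).
    w_heading, w_dist, w_vel = weights
    # Heading term: how well the final pose points at the goal.
    dx, dy = goal[0] - traj[-1, 0], goal[1] - traj[-1, 1]
    heading = np.pi - abs(np.arctan2(dy, dx) - traj[-1, 2])
    # Clearance term: distance from the trajectory to the nearest
    # obstacle (obstacles is assumed to be a non-empty list of (x, y)).
    clearance = min(
        float(np.hypot(traj[:, 0] - ox, traj[:, 1] - oy).min())
        for ox, oy in obstacles
    )
    # Velocity term: prefer faster forward motion at the final state.
    return w_heading * heading + w_dist * clearance + w_vel * traj[-1, 3]

def reward(ttc, heading_err, vel, ttc_safe=3.0, v_max=1.0):
    # Non-sparse reward combining Time-to-Collision, heading error, and
    # velocity, as the abstract describes; thresholds are guesses.
    r_ttc = -1.0 if ttc < ttc_safe else 0.0  # penalize imminent collision
    r_head = -abs(heading_err) / np.pi       # reward facing the goal
    r_vel = vel / v_max                      # reward forward progress
    return r_ttc + r_head + r_vel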
Pages: 16