Obstacle Avoidance Based on Deep Reinforcement Learning and Artificial Potential Field

Cited by: 4
Authors
Han, Haoran [1 ]
Xi, Zhilong [1 ]
Cheng, Jian [1 ]
Lv, Maolong [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu, Peoples R China
[2] Air Force Engn Univ, Air Traff Control & Nav Coll, Xian, Peoples R China
Source
2023 9TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND ROBOTICS, ICCAR | 2023
Keywords
obstacle avoidance; deep reinforcement learning (DRL); artificial potential field (APF);
DOI
10.1109/ICCAR57134.2023.10151771
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Obstacle avoidance is an essential part of mobile robot path planning, since it ensures the safety of automatic control. This paper proposes an obstacle avoidance algorithm that combines the artificial potential field (APF) method with deep reinforcement learning (DRL). State regulation is introduced so that a pre-defined velocity constraint can be satisfied. To guarantee the isotropy of the robot controller and reduce training complexity, a coordinate transformation into the normal and tangent directions is introduced, making it possible for one-dimensional controllers to operate in a two-dimensional task. The APF is modified so that obstacles directly affect intermediate target positions rather than the control commands, allowing it to guide the previously trained one-dimensional DRL controller. Experimental results show that the proposed algorithm successfully accomplishes obstacle avoidance tasks in both single-agent and multi-agent scenarios.
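The abstract describes using the APF to shift intermediate target positions rather than feeding potential-field forces directly into the control commands. As a rough, hypothetical sketch of how an intermediate target can be derived from a classic potential field (the gains `k_att`, `k_rep`, influence radius `d0`, and step size are illustrative values, not the paper's formulation):

```python
import numpy as np

def apf_intermediate_target(pos, goal, obstacles,
                            k_att=1.0, k_rep=100.0, d0=2.0, step=0.5):
    """Compute an intermediate target from a classic artificial potential
    field: attraction toward the goal plus repulsion from any obstacle
    closer than the influence radius d0. All gains are illustrative."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    # Attractive force: proportional to the vector from the robot to the goal.
    force = k_att * (goal - pos)
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:
            # Khatib-style repulsive term, pushing away from the obstacle;
            # it grows rapidly as the robot approaches the obstacle.
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    n = np.linalg.norm(force)
    # Place the intermediate target one fixed step along the resultant force;
    # a tracking controller (here, the DRL controller) would then steer to it.
    return pos if n < 1e-9 else pos + step * force / n
```

With no obstacles nearby, the intermediate target simply advances toward the goal; near an obstacle, the repulsive term deflects the target away, and the lower-level controller never sees the potential-field forces directly.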
Pages: 215-220 (6 pages)
Related Papers
17 records in total
[1] Chen, Yanli; Bai, Guiqiang; Zhan, Yin; Hu, Xinyu; Liu, Jun. Path Planning and Obstacle Avoiding of the USV Based on Improved ACO-APF Hybrid Algorithm With Adaptive Early-Warning. IEEE Access, 2021, 9: 40728-40742.
[2] Chiang, H. T. IEEE International Conference on Robotics and Automation (ICRA), 2015: p. 2347. DOI: 10.1109/ICRA.2015.7139511.
[3] Chiang, Hao-Tien Lewis; Hsu, Jasmine; Fiser, Marek; Tapia, Lydia; Faust, Aleksandra. RL-RRT: Kinodynamic Motion Planning via Learning Reachability Estimators From RL Policies. IEEE Robotics and Automation Letters, 2019, 4(4): 4298-4305.
[4] Chintala, P. Int. J. Mech. Eng. Robot., 2022, 11: 373.
[5] Dulac-Arnold, Gabriel; Levine, Nir; Mankowitz, Daniel J.; Li, Jerry; Paduraru, Cosmin; Gowal, Sven; Hester, Todd. Challenges of real-world reinforcement learning: definitions, benchmarks and analysis. Machine Learning, 2021, 110(9): 2419-2468.
[6] Francis, Anthony; Faust, Aleksandra; Chiang, Hao-Tien; Hsu, Jasmine; Kew, J. Chase; Fiser, Marek; Lee, Tsang-Wei Edward. Long-Range Indoor Navigation With PRM-RL. IEEE Transactions on Robotics, 2020, 36(4): 1115-1134.
[7] Fujimoto, S. Proc. Mach. Learn. Res., 2018, 80.
[8] Han, Haoran; Cheng, Jian; Xi, Zhilong; Yao, Bingcai. Cascade Flight Control of Quadrotors Based on Deep Reinforcement Learning. IEEE Robotics and Automation Letters, 2022, 7(4): 11134-11141.
[9] Hart, P. E.; Nilsson, N. J.; Raphael, B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Transactions on Systems Science and Cybernetics, 1968, SSC-4(2): 100-107.
[10] Jayaweera, Herath M. P. C.; Hanoun, Samer. A Dynamic Artificial Potential Field (D-APF) UAV Path Planning Technique for Following Ground Moving Targets. IEEE Access, 2020, 8: 192760-192776.