Obstacle Avoidance Based on Deep Reinforcement Learning and Artificial Potential Field

Cited: 3
Authors
Han, Haoran [1 ]
Xi, Zhilong [1 ]
Cheng, Jian [1 ]
Lv, Maolong [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu, Peoples R China
[2] Air Force Engn Univ, Air Traff Control & Nav Coll, Xian, Peoples R China
Source
2023 9TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND ROBOTICS, ICCAR | 2023
Keywords
obstacle avoidance; deep reinforcement learning (DRL); artificial potential field (APF);
DOI
10.1109/ICCAR57134.2023.10151771
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Obstacle avoidance is an essential part of mobile robot path planning, since it ensures the safety of automatic control. This paper proposes an obstacle avoidance algorithm that combines an artificial potential field with deep reinforcement learning (DRL). State regulation is presented so that a pre-defined velocity constraint can be satisfied. To guarantee the isotropy of the robot controller and to reduce training complexity, a coordinate transformation into normal and tangent directions is introduced, making it possible for one-dimensional controllers to operate in a two-dimensional task. The artificial potential field (APF) is modified so that obstacles directly affect the intermediate target positions rather than the control commands, and this modified field is then used to guide the previously trained one-dimensional DRL controller. Experimental results show that the proposed algorithm successfully accomplishes obstacle avoidance in both single-agent and multi-agent scenarios.
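The core idea in the abstract — repulsive forces shifting an intermediate target position rather than the control command, plus a normal/tangent decomposition so one-dimensional controllers can act in 2-D — can be sketched as follows. This is a minimal illustrative sketch based only on the abstract; the function names, gains (`k_rep`, `d0`), and the classic APF repulsion formula are assumptions, not the authors' implementation.

```python
import numpy as np

def intermediate_target(robot, goal, obstacles, k_rep=1.0, d0=2.0):
    """Shift the intermediate target away from nearby obstacles.

    Per the abstract, the obstacle's repulsive effect is applied to the
    target position, not directly to the control command.
    """
    offset = np.zeros(2)
    for obs in obstacles:
        diff = robot - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # Classic APF repulsion magnitude, used here as a positional offset
            offset += k_rep * (1.0 / d - 1.0 / d0) * diff / d**2
    return goal + offset

def to_normal_tangent(robot, target):
    """Decompose the robot-to-target geometry into a tangent direction
    (toward the target) and its normal, so two 1-D controllers suffice
    for a 2-D task."""
    diff = target - robot
    dist = np.linalg.norm(diff)
    tangent = diff / dist if dist > 1e-9 else np.array([1.0, 0.0])
    normal = np.array([-tangent[1], tangent[0]])  # 90-degree rotation
    return tangent, normal
```

In such a scheme, a DRL controller trained on a one-dimensional tracking task could then be applied independently along each of the two directions returned by `to_normal_tangent`.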
Pages: 215 - 220 (6 pages)
Related Papers (50 total)
  • [31] Adaptive Artificial Potential Field Approach for Obstacle Avoidance of Unmanned Aircrafts
    Rezaee, Hamed
    Abdollahi, Farzaneh
    2012 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2012,
  • [32] Adaptive artificial potential field approach for obstacle avoidance path planning
    Zhou, Li
    Li, Wei
    2014 SEVENTH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID 2014), VOL 2, 2014,
  • [33] Optimization of Obstacle Avoidance Using Reinforcement Learning
    Kominami, Keishi
    Takubo, Tomohito
    Ohara, Kenichi
    Mae, Yasushi
    Arai, Tatsuo
    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII), 2012, : 67 - 72
  • [34] An Obstacle Avoidance Method Using Asynchronous Policy-based Deep Reinforcement Learning with Discrete Action
    Wang, Yuechuan
    Yao, Fenxi
    Cui, Lingguo
    Chai, Senchun
    2022 34TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2022, : 6235 - 6241
  • [35] A human-like collision avoidance method for USVs based on deep reinforcement learning and velocity obstacle
    Yang, Xiaofei
    Lou, Mengmeng
    Hu, Jiabao
    Ye, Hui
    Zhu, Zhiyu
    Shen, Hao
    Xiang, Zhengrong
    Zhang, Bin
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 254
  • [36] Obstacle Avoidance Path Planning of Space Manipulator Based on Improved Artificial Potential Field Method
    Liu S.
    Zhang Q.
    Zhou D.
    Zhang, Q. (zhangq30@yahoo.com), 1600, Springer (95): 31 - 39
  • [37] Mechanical arm obstacle avoidance path planning based on improved artificial potential field method
    Xu, Tianying
    Zhou, Haibo
    Tan, Shuaixia
    Li, Zhiqiang
    Ju, Xia
    Peng, Yichang
    INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2022, 49 (02): : 271 - 279
  • [38] Artificial Potential Field APF-based Obstacle Avoidance Technique for Robot Arm Teleoperation
    Elahres, Mustafa
    Abbes, Manel
    Fonte, Aicha
    Poisson, Gerard
    2023 27TH INTERNATIONAL CONFERENCE ON METHODS AND MODELS IN AUTOMATION AND ROBOTICS, MMAR, 2023, : 222 - 227
  • [39] Neural networks based reinforcement learning for mobile robots obstacle avoidance
    Duguleana, Mihai
    Mogan, Gheorghe
    EXPERT SYSTEMS WITH APPLICATIONS, 2016, 62 : 104 - 115
  • [40] Obstacle avoidance method of mobile robot based on obstacle cost potential field
    Chi S.
    Xie Y.
    Chen X.
    Peng F.
    Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2022, 48 (11): : 2289 - 2303