Deep-Reinforcement-Learning-Based Path Planning for Industrial Robots Using Distance Sensors as Observation

Cited by: 6
Authors
Bhuiyan, Teham [1 ]
Kaestner, Linh [1 ]
Hu, Yifan [1 ]
Kutschank, Benno [1 ]
Lambrecht, Jens [1 ]
Affiliations
[1] Berlin Inst Technol, Chair Ind Grade Networks & Clouds, Fac Elect Engn & Comp Sci, Berlin, Germany
Source
2023 8TH INTERNATIONAL CONFERENCE ON CONTROL AND ROBOTICS ENGINEERING, ICCRE | 2023
Keywords
Deep Reinforcement Learning; Path Planning; Industrial Robots; Automation;
DOI
10.1109/ICCRE57112.2023.10155608
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Traditionally, collision-free path planning for industrial robots is realized by sampling-based algorithms such as RRT (Rapidly-exploring Random Tree) and PRM (Probabilistic Roadmap). Sampling-based algorithms require long computation times, especially in complex environments. Furthermore, the environment in which they are employed needs to be known beforehand. When utilizing these approaches in new environments, a tedious, time- and cost-intensive engineering effort is required to tune their hyperparameters. On the other hand, DRL (Deep Reinforcement Learning) has shown remarkable results in dealing with complex environments, generalizing to new problem instances, and solving motion planning problems efficiently. On that account, this paper proposes a Deep-Reinforcement-Learning-based motion planner for robotic manipulators. We propose an easily reproducible method to train an agent in randomized scenarios, achieving generalization to unknown environments. We evaluated our model against state-of-the-art sampling- and DRL-based planners in several experiments containing static and dynamic obstacles. The results show the adaptability of our agent to new environments and its superiority over conventional methods in terms of path length and execution time. Our code is available on GitHub [1].
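The abstract describes training a DRL agent on randomized scenarios, with distance-sensor readings as the observation. The sketch below is not the authors' implementation (their code is the GitHub repository cited as [1]); it is a minimal, hypothetical illustration of that kind of training setup, assuming a Gymnasium-style environment and Stable-Baselines3 PPO. The environment RandomizedReachEnv, its ring-of-rays sensor model, the reward weights, and all constants are illustrative stand-ins rather than values from the paper.

# Minimal sketch (not the authors' code): distance-sensor observations,
# domain-randomized episodes, and a standard PPO training call.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class RandomizedReachEnv(gym.Env):
    """Toy 2D stand-in for sensor-based, collision-aware goal reaching."""

    NUM_SENSORS = 12   # simulated distance rays around the end effector
    MAX_RANGE = 1.0    # sensor saturation distance
    STEP_SIZE = 0.05   # per-step displacement
    MAX_STEPS = 200    # episode length cap

    def __init__(self):
        super().__init__()
        obs_dim = self.NUM_SENSORS + 2                      # rays + relative goal
        self.observation_space = spaces.Box(-2.0, 2.0, (obs_dim,), np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, (2,), np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Domain randomization: a new obstacle layout and goal every episode,
        # which is the mechanism the abstract credits for generalization.
        self.obstacles = self.np_random.uniform(-1.0, 1.0, size=(5, 2))
        self.goal = self.np_random.uniform(-1.0, 1.0, size=2)
        self.pos = np.zeros(2)
        self.t = 0
        return self._observe(), {}

    def step(self, action):
        self.t += 1
        self.pos = np.clip(self.pos + self.STEP_SIZE * np.asarray(action), -1.0, 1.0)
        dist_goal = np.linalg.norm(self.goal - self.pos)
        dist_obst = np.min(np.linalg.norm(self.obstacles - self.pos, axis=1))
        # Dense shaping: move toward the goal, keep clearance from obstacles.
        reward = -dist_goal - 0.5 * max(0.0, 0.2 - dist_obst)
        terminated = bool(dist_goal < 0.05 or dist_obst < 0.05)
        truncated = self.t >= self.MAX_STEPS
        return self._observe(), float(reward), terminated, truncated, {}

    def _observe(self):
        # Map each obstacle onto the nearest of NUM_SENSORS angular bins,
        # emulating a ring of distance sensors with limited range.
        rays = np.full(self.NUM_SENSORS, self.MAX_RANGE)
        bin_width = 2 * np.pi / self.NUM_SENSORS
        for obst in self.obstacles:
            rel = obst - self.pos
            idx = int(np.arctan2(rel[1], rel[0]) // bin_width) % self.NUM_SENSORS
            rays[idx] = min(rays[idx], np.linalg.norm(rel))
        obs = np.concatenate([rays / self.MAX_RANGE, self.goal - self.pos])
        return obs.astype(np.float32)


if __name__ == "__main__":
    from stable_baselines3 import PPO
    model = PPO("MlpPolicy", RandomizedReachEnv(), verbose=1)
    model.learn(total_timesteps=10_000)

A full setup in the spirit of the paper would replace this toy geometry with a physics simulator such as Bullet3 (reference [2]) and a manipulator model; only the overall pattern of randomized resets plus sensor-vector observations is meant to carry over.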
Pages: 204 - 210
Number of pages: 7
References
30 in total
  • [1] [Anonymous], IR DRL REP CONT COD
  • [2] [Anonymous], BULLET3
  • [3] Learning Navigation Behaviors End-to-End With AutoRL
    Chiang, Hao-Tien Lewis
    Faust, Aleksandra
    Fiser, Marek
    Francis, Anthony
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2019, 4 (02) : 2007 - 2014
  • [4] Chintala P., 2022, INT J MECH ENG ROBOT
  • [5] Daniel D., 2020, arXiv:2012.04406
  • [6] Dong X., 2020, Appl. Sci.
  • [7] NavRep: Unsupervised Representations for Reinforcement Learning of Robot Navigation in Dynamic Human Environments
    Dugas, Daniel
    Nieto, Juan
    Siegwart, Roland
    Chung, Jen Jen
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 7829 - 7835
  • [8] Faust A, 2018, IEEE INT CONF ROBOT, P5113
  • [9] All-in-One: A DRL-based Control Switch Combining State-of-the-art Navigation Planners
    Kaestner, Linh
    Cox, Johannes
    Buiyan, Teham
    Lambrecht, Jens
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 2861 - 2867
  • [10] Arena-Rosnav: Towards Deployment of Deep-Reinforcement-Learning-Based Obstacle Avoidance into Conventional Autonomous Navigation Systems
    Kaestner, Linh
    Buiyan, Teham
    Jiao, Lei
    Tuan Anh Le
    Zhao, Xinlin
    Shen, Zhengcheng
    Lambrecht, Jens
    [J]. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 6456 - 6463