Reinforcement learning-driven dynamic obstacle avoidance for mobile robot trajectory tracking

Cited by: 4
Authors
Xiao, Hanzhen [1 ]
Chen, Canghao [1 ]
Zhang, Guidong [1 ]
Chen, C. L. Philip [2 ,3 ]
Affiliations
[1] Guangdong Univ Technol, Sch Automat, Guangzhou, Peoples R China
[2] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[3] Pazhou Lab, Ctr Affect Comp & Gen Models, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Reinforcement learning; Obstacle avoidance; Q-Learning; Trajectory tracking; Mobile robot; NAVIGATION;
DOI
10.1016/j.knosys.2024.111974
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this work, we propose a trajectory tracking method with real-time obstacle avoidance capability, based on optimized Q-Learning (QL), for controlling wheeled mobile robots in dynamic local environments. From the observation data and the robot's state, the designed reinforcement learning (RL) method determines the obstacle-avoidance action during trajectory tracking, while controllers simultaneously maintain action precision. Through a simple observation space data processing method (OSDPM), the input data from the onboard lidar is transformed into a dimensionality-reduced index vector encoding the robot's surrounding environment, which lets QL quickly map the robot's current observation to a state in the Q-Table. To improve the iteration and decision efficiency of the RL method, we optimize the Q-Table structure according to the types of data used. Finally, simulation results verify the effectiveness of the OSDPM and the obstacle-avoidance ability of the RL method in unknown local environments.
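The abstract's pipeline (lidar scan → reduced index vector → Q-table state → tabular update) can be sketched as follows. This is a minimal illustration, not the paper's OSDPM: the sector count, distance levels, action set, and mixed-radix state encoding are all assumptions introduced here for clarity.

```python
import numpy as np

# Assumed discretization parameters (not from the paper).
N_SECTORS = 8   # angular sectors around the robot
N_LEVELS = 3    # discrete distance levels: near / mid / far
N_ACTIONS = 5   # e.g. hard-left, left, straight, right, hard-right

def scan_to_state(ranges, max_range=4.0):
    """Reduce a 1-D lidar scan to a single Q-table state index."""
    sectors = np.array_split(np.asarray(ranges, dtype=float), N_SECTORS)
    # Nearest obstacle per sector, binned into N_LEVELS distance levels.
    mins = np.array([s.min() for s in sectors])
    levels = np.minimum((mins / max_range * N_LEVELS).astype(int), N_LEVELS - 1)
    # Flatten the index vector into one integer (mixed-radix encoding),
    # so each observation maps directly to a Q-table row.
    state = 0
    for lv in levels:
        state = state * N_LEVELS + int(lv)
    return state

q_table = np.zeros((N_LEVELS ** N_SECTORS, N_ACTIONS))

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update."""
    q_table[s, a] += alpha * (r + gamma * q_table[s_next].max() - q_table[s, a])
```

Binning each sector to its nearest return keeps the state space at `N_LEVELS ** N_SECTORS` rows (here 6561) regardless of the lidar's beam count, which is the kind of dimensionality reduction the abstract attributes to the OSDPM.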
Pages: 11
Related Papers
50 records total
  • [31] Optimal Trajectory Planning of a Mobile Robot with Spatial Manipulator For Obstacle Avoidance
    Mostafa, Shariati Nia
    Mostafa, Ghayour
    Masoud, Mosayebi
    INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2010), 2010, : 314 - 318
  • [32] Reinforcement Learning with Dynamic Movement Primitives for Obstacle Avoidance
    Li, Ang
    Liu, Zhenze
    Wang, Wenrui
    Zhu, Mingchao
    Li, Yanhui
    Huo, Qi
    Dai, Ming
    APPLIED SCIENCES-BASEL, 2021, 11 (23):
  • [33] Experimental Validation of an Intelligent Obstacle Avoidance Algorithm with an Omnidirectional Mobile Robot for Dynamic Obstacle Avoidance
    Hindistan, Cagri
    Selim, Erman
    Tatlicioglu, Enver
    IFAC PAPERSONLINE, 2024, 58 (30): : 103 - 108
  • [34] Robot Obstacle Avoidance Controller Based on Deep Reinforcement Learning
    Tang, Yaokun
    Chen, Qingyu
    Wei, Yuxin
    JOURNAL OF SENSORS, 2022, 2022
  • [35] Robot obstacle avoidance system using deep reinforcement learning
    Zhu, Xiaojun
    Liang, Yinghao
    Sun, Hanxu
    Wang, Xueqian
    Ren, Bin
    INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2022, 49 (02): : 301 - 310
  • [37] Dynamic Trajectory Tracking Control of Mobile Robot
    Fan, Longtao
    Zhang, Yuanheng
    Zhang, Sen
    2018 5TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND CONTROL ENGINEERING (ICISCE 2018), 2018, : 728 - 732
  • [38] Dynamic obstacle avoidance of a mobile robot using AR markers
    Mori, Yusuke
    Izumi, Kiyotaka
    Tsujimura, Takeshi
    2023 62ND ANNUAL CONFERENCE OF THE SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS, SICE, 2023, : 1442 - 1447
  • [39] Fuzzy Reinforcement Learning Based Trajectory-tracking Control of an Autonomous Mobile Robot
    Zaman, Muhammad Qomaruz
    Wu, Hsiu-Ming
    2022 22ND INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2022), 2022, : 840 - 845
  • [40] Dynamic Obstacle Avoidance for Cable-Driven Parallel Robots With Mobile Bases via Sim-to-Real Reinforcement Learning
    Liu, Yuming
    Cao, Zhihao
    Xiong, Hao
    Du, Junfeng
    Cao, Huanhui
    Zhang, Lin
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (03) : 1683 - 1690