Representation Learning and Reinforcement Learning for Dynamic Complex Motion Planning System

Cited by: 3
Authors
Zhou, Chengmin [1 ,2 ]
Huang, Bingding [2 ]
Franti, Pasi [1 ]
Affiliations
[1] Univ Eastern Finland, Sch Comp, Machine Learning Grp, Joensuu 80100, Finland
[2] Shenzhen Technol Univ, Coll Big Data & Internet, Shenzhen 518118, Peoples R China
Keywords
Intelligent robot; motion planning; reinforcement learning (RL); representation learning;
DOI
10.1109/TNNLS.2023.3247160
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Indoor motion planning challenges researchers because of the high density and unpredictability of moving obstacles. Classical algorithms work well with static obstacles but suffer from collisions when obstacles are dense and dynamic. Recent reinforcement learning (RL) algorithms provide safe solutions for multiagent robotic motion planning systems, but they face convergence challenges: slow convergence speed and suboptimal converged results. Inspired by RL and representation learning, we introduce ALN-DSAC, a hybrid motion planning algorithm that combines attention-based long short-term memory (LSTM) encoding and a novel data replay scheme with the discrete soft actor-critic (SAC). First, we implement a discrete SAC algorithm, i.e., SAC adapted to discrete action spaces. Second, we upgrade the existing distance-based LSTM encoding to attention-based encoding to improve the data quality. Third, we introduce a novel data replay method that combines online and offline learning to improve the efficacy of data replay. The convergence of ALN-DSAC outperforms that of the trainable state-of-the-art methods. Evaluations demonstrate that our algorithm achieves nearly 100% success and reaches the goal in less time than state-of-the-art methods in motion planning tasks. The test code is available at https://github.com/CHUENGMINCHOU/ALN-DSAC.
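To illustrate the discrete-action SAC component mentioned in the abstract, the following is a minimal PyTorch sketch of the entropy-regularized actor loss for a categorical policy. It is an assumption-based illustration, not the authors' implementation (which is available at the linked repository); the names and dimensions (STATE_DIM, N_ACTIONS, DiscretePolicy, QNetwork) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions for illustration only.
STATE_DIM, N_ACTIONS, HIDDEN = 16, 5, 64

class DiscretePolicy(nn.Module):
    """Categorical policy over a discrete action set."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, N_ACTIONS))

    def forward(self, s):
        return F.softmax(self.net(s), dim=-1)  # action probabilities

class QNetwork(nn.Module):
    """Q(s, .) head returning one value per discrete action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, N_ACTIONS))

    def forward(self, s):
        return self.net(s)

def discrete_sac_actor_loss(policy, q1, q2, states, alpha=0.2):
    """Actor loss E_s[ pi(s)^T (alpha * log pi(s) - min(Q1, Q2)(s)) ].
    The expectation over actions is computed exactly because the action
    space is discrete, so no reparameterization trick is needed."""
    probs = policy(states)                      # [B, A]
    log_probs = torch.log(probs + 1e-8)         # [B, A]
    q_min = torch.min(q1(states), q2(states))   # clipped double-Q, [B, A]
    return (probs * (alpha * log_probs - q_min)).sum(dim=-1).mean()

if __name__ == "__main__":
    policy, q1, q2 = DiscretePolicy(), QNetwork(), QNetwork()
    batch = torch.randn(8, STATE_DIM)           # dummy state features
    loss = discrete_sac_actor_loss(policy, q1, q2, batch)
    loss.backward()
    print(float(loss))
```

The attention-based LSTM encoding and the combined online/offline replay described in the abstract would sit upstream of this loss (producing the state features) and around it (selecting training batches), respectively.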
Pages: 11049-11063
Page count: 15
Related papers
50 records in total
  • [31] Learning a Belief Representation for Delayed Reinforcement Learning
    Liotet, Pierre
    Venneri, Erick
    Restelli, Marcello
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [32] Decoupling Representation Learning from Reinforcement Learning
    Stooke, Adam
    Lee, Kimin
    Abbeel, Pieter
    Laskin, Michael
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [33] Learning Network Representation Through Reinforcement Learning
    Shen, Siqi
    Fu, Yongquan
    Jia, Adele Lu
    Su, Huayou
    Wang, Qinglin
    Wang, Chengsong
    Dou, Yong
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3537 - 3541
  • [34] Masked Contrastive Representation Learning for Reinforcement Learning
    Zhu, Jinhua
    Xia, Yingce
    Wu, Lijun
    Deng, Jiajun
    Zhou, Wengang
    Qin, Tao
    Liu, Tie-Yan
    Li, Houqiang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3421 - 3433
  • [35] Representation Learning on Graphs: A Reinforcement Learning Application
    Madjiheurem, Sephora
    Toni, Laura
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [36] State Representation Learning for Task and Motion Planning in Robot Manipulation
    Qu Weiming
    Wei Yaoyao
    Luo Dingsheng
    2023 IEEE INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING, ICDL, 2023, : 93 - 99
  • [37] Unmanned Aerial Vehicle Path Planning in Complex Dynamic Environments Based on Deep Reinforcement Learning
    Liu, Jiandong
    Luo, Wei
    Zhang, Guoqing
    Li, Ruihao
    MACHINES, 2025, 13 (02)
  • [38] Learning to Navigate Through Complex Dynamic Environment With Modular Deep Reinforcement Learning
    Wang, Yuanda
    He, Haibo
    Sun, Changyin
    IEEE TRANSACTIONS ON GAMES, 2018, 10 (04) : 400 - 412
  • [39] Distilling a Hierarchical Policy for Planning and Control via Representation and Reinforcement Learning
    Ha, Jung-Su
    Park, Young-Jin
    Chae, Hyeok-Joo
    Park, Soon-Seo
    Choi, Han-Lim
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 4459 - 4466
  • [40] Using Implicit Behavior Cloning and Dynamic Movement Primitive to Facilitate Reinforcement Learning for Robot Motion Planning
    Zhang, Zengjie
    Hong, Jayden
    Enayati, Amir M. Soufi
    Najjaran, Homayoun
    IEEE TRANSACTIONS ON ROBOTICS, 2024, 40 : 4733 - 4749