An Improved Dueling Deep Double-Q Network Based on Prioritized Experience Replay for Path Planning of Unmanned Surface Vehicles

Cited by: 14
Authors
Zhu, Zhengwei [1 ]
Hu, Can [1 ]
Zhu, Chenyang [2 ]
Zhu, Yanping [1 ]
Sheng, Yu [1 ]
Affiliations
[1] Changzhou Univ, Sch Microelect & Control Engn, Changzhou 213164, Jiangsu, Peoples R China
[2] Changzhou Univ, Sch Comp Sci & Artificial Intelligence, Changzhou 213164, Jiangsu, Peoples R China
Keywords
deep reinforcement learning; unmanned surface vehicle; path planning; algorithm optimization; fusion and integration
DOI
10.3390/jmse9111267
Chinese Library Classification (CLC)
U6 [Water Transportation]; P75 [Ocean Engineering]
Subject Classification Codes
0814; 081505; 0824; 082401
Abstract
Unmanned Surface Vehicles (USVs) have broad application prospects, and autonomous path planning, as one of their crucial technologies, has become an active research direction in the USV field. This paper proposes an Improved Dueling Deep Double-Q Network based on Prioritized Experience Replay (IPD3QN) to address the slow and unstable convergence of the traditional Deep Q-Network (DQN) algorithm in autonomous path planning of USVs. First, the deep double Q-network is used to decouple the selection and evaluation of the target Q-value action, which eliminates overestimation. Prioritized experience replay is adopted to draw samples from the replay buffer, increasing the utilization of informative samples and accelerating neural network training. The network is then optimized by introducing a dueling architecture. Finally, a soft update scheme improves the stability of the algorithm, and a dynamic epsilon-greedy policy is used to search for the optimal strategy. Experiments are first conducted on the OpenAI Gym platform to pre-validate the algorithm on two classical control problems, CartPole and MountainCar, and the impact of hyperparameters on model performance is analyzed in detail. The algorithm is then validated in a maze environment. Comparative simulation experiments show that IPD3QN significantly improves learning performance, in terms of both convergence speed and convergence stability, compared with DQN, D3QN, PD2QN, PDQN, and PD3QN. With the IPD3QN algorithm, the USV can also plan the optimal path according to the actual navigation environment.
Pages: 15
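The abstract combines several standard deep-RL components: a double-Q target that decouples action selection from evaluation, prioritized experience replay, a dueling network head, soft target-network updates, and a dynamic epsilon-greedy policy. The following is a minimal PyTorch sketch of how these pieces typically fit together; the network sizes and hyperparameters (hidden width, gamma, tau, alpha, decay schedule) are illustrative assumptions, not the authors' reported IPD3QN settings.

```python
import numpy as np
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, s):
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)


def double_q_target(online_net, target_net, reward, next_state, done, gamma=0.99):
    """Double-Q target: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        a_star = online_net(next_state).argmax(dim=1, keepdim=True)   # selection
        q_next = target_net(next_state).gather(1, a_star).squeeze(1)  # evaluation
    return reward + gamma * (1.0 - done) * q_next


def soft_update(target_net, online_net, tau=0.005):
    """Soft (Polyak) update: target <- tau * online + (1 - tau) * target."""
    for tp, op in zip(target_net.parameters(), online_net.parameters()):
        tp.data.mul_(1.0 - tau).add_(tau * op.data)


def dynamic_epsilon(step, eps_start=1.0, eps_end=0.05, decay=5000.0):
    """Exploration rate that decays with training steps (dynamic epsilon-greedy)."""
    return eps_end + (eps_start - eps_end) * float(np.exp(-step / decay))


def sample_prioritized(priorities, batch_size, alpha=0.6):
    """Proportional prioritized replay: sample index i with probability p_i^alpha / sum_j p_j^alpha."""
    p = np.asarray(priorities, dtype=np.float64) ** alpha
    p /= p.sum()
    return np.random.choice(len(priorities), size=batch_size, p=p)
```

In the dueling head, subtracting the mean advantage keeps the value and advantage streams identifiable; combining the double-Q target with soft (Polyak) target updates is what the abstract credits for the reported gains in convergence speed and stability.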