Deep hierarchical reinforcement learning based formation planning for multiple unmanned surface vehicles with experimental results

Citations: 17
Authors
Wei, Xiangwei [1 ]
Wang, Hao [1 ]
Tang, Yixuan [1 ]
Affiliations
[1] Northeastern University, Faculty of Robot Science and Engineering, Shenyang 110819, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Deep reinforcement learning; Hierarchical reinforcement learning; Artificial potential field; Formation control; Unmanned surface vehicles; Controller
DOI
10.1016/j.oceaneng.2023.115577
CLC Classification
U6 [Waterway Transportation]; P75 [Ocean Engineering]
Discipline Classification Codes
0814; 081505; 0824; 082401
Abstract
In this paper, a novel multi-USV formation path planning algorithm based on deep reinforcement learning is proposed. First, a goal-based hierarchical reinforcement learning algorithm is designed to improve training speed and resolve planning conflicts within the formation. Second, an improved artificial potential field algorithm is incorporated into the training process to obtain an optimal path planning and obstacle avoidance learning scheme for multiple USVs in a given perceptual environment. Finally, a formation geometry model is established to describe the physical relationships among the USVs, and a composite reward function is proposed to guide training. Numerous simulation tests are conducted, and the effectiveness of the proposed algorithm is further validated on the NEU-MSV01 experimental platform in combination with parameterized Line-of-Sight (LOS) guidance.
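As an illustrative aid only, the sketch below shows one way a composite reward of the kind summarized above could be assembled for a single USV: a goal-distance term for the subgoal issued by a high-level policy, an artificial-potential-field repulsion term for nearby obstacles, and a formation-keeping term measuring deviation from a desired offset to a neighboring vehicle. The weights, the inverse-distance repulsive potential, and all function names here are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical weights and influence radius; the paper's actual composite
# reward and APF parameters are not reproduced here.
W_GOAL, W_OBS, W_FORM = 1.0, 0.5, 0.8
OBSTACLE_INFLUENCE_RADIUS = 5.0  # [m]; repulsion acts only inside this range


def apf_repulsion(pos, obstacles, rho0=OBSTACLE_INFLUENCE_RADIUS):
    """Classic APF-style repulsive potential, summed over nearby obstacles."""
    u = 0.0
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 0.0 < d < rho0:
            u += 0.5 * (1.0 / d - 1.0 / rho0) ** 2
    return u


def composite_reward(pos, subgoal, obstacles, neighbor_pos, desired_offset):
    """Illustrative composite reward for one USV in the formation.

    - goal term: negative distance to the subgoal from the high-level policy
    - obstacle term: negative APF repulsive potential
    - formation term: negative deviation from the desired geometric offset
      to a neighboring USV (a stand-in for the formation geometry model)
    """
    r_goal = -np.linalg.norm(subgoal - pos)
    r_obs = -apf_repulsion(pos, obstacles)
    r_form = -np.linalg.norm((neighbor_pos + desired_offset) - pos)
    return W_GOAL * r_goal + W_OBS * r_obs + W_FORM * r_form


if __name__ == "__main__":
    pos = np.array([1.0, 2.0])
    subgoal = np.array([10.0, 10.0])
    obstacles = [np.array([3.0, 3.0]), np.array([6.0, 8.0])]
    neighbor = np.array([0.0, 2.0])
    offset = np.array([1.5, 0.0])  # hypothetical desired formation offset
    print(composite_reward(pos, subgoal, obstacles, neighbor, offset))
```

In a goal-based hierarchical setup of this general shape, the high-level policy would periodically issue subgoals to each low-level USV policy, which is then trained against a per-step reward like the one sketched above.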
Pages: 9