Distributed cooperative H∞ optimal control of underactuated autonomous underwater vehicles based on reinforcement learning and prescribed performance

Cited by: 2
Authors
Zhuo, Jiaoyang [1 ,2 ]
Tian, Xuehong [1 ,2 ,3 ]
Liu, Haitao [1 ,2 ,3 ]
Affiliations
[1] Guangdong Ocean Univ, Sch Mech Engn, Zhanjiang 524088, Peoples R China
[2] Guangdong Ocean Univ, Shenzhen Inst, Shenzhen 518120, Peoples R China
[3] Guangdong Engn Technol Res Ctr Ocean Equipment & M, Zhanjiang 524088, Peoples R China
Keywords
Underactuated autonomous underwater vehicle; Optimal control; Trajectory tracking; Prescribed performance control; Reinforcement learning; H-infinity control; Tracking control;
DOI
10.1016/j.oceaneng.2024.119323
Chinese Library Classification (CLC)
U6 [Water Transportation]; P75 [Ocean Engineering];
Discipline Classification Codes
0814 ; 081505 ; 0824 ; 082401 ;
Abstract
To balance energy consumption and control performance, a distributed cooperative H-infinity optimal control method based on prescribed performance control (PPC) and an actor-critic reinforcement learning (RL) algorithm is proposed for multiple five-degree-of-freedom underactuated autonomous underwater vehicles (AUVs) subject to unknown uncertainties and disturbances. First, an optimal control strategy combined with PPC is proposed to achieve optimal control of the cooperative system while ensuring that the tracking error always remains within the prescribed boundary. Second, to suppress the uncertainties and disturbances, an H-infinity control method is introduced to improve the robustness of the system. Achieving H-infinity optimal control requires solving the Hamilton-Jacobi-Bellman (HJB) equation, whose inherent nonlinearity makes an analytical solution difficult. Therefore, an adaptive approximation strategy based on an online RL method with an actor-critic architecture is used to solve this problem, dynamically adjusting the control policy through environment assessment and feedback to maintain control performance. In addition, a distributed adaptive state observer is proposed so that each agent can accurately estimate the virtual leader's state even though it communicates only with neighboring agents. With the proposed control method, all errors of the formation system are proven to be uniformly ultimately bounded according to Lyapunov stability theory. Finally, a numerical simulation is performed to further demonstrate the effectiveness and feasibility of the proposed method.
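For illustration only, and under generic notation that may differ from the authors' formulation: a prescribed performance design keeps each tracking error e(t) inside a decaying envelope

-\underline{\delta}\,\rho(t) < e(t) < \overline{\delta}\,\rho(t), \qquad \rho(t) = (\rho_{0}-\rho_{\infty})e^{-\kappa t} + \rho_{\infty},

and the H-infinity optimal control problem is typically posed as the zero-sum game

V^{*}(x) = \min_{u}\max_{d} \int_{t}^{\infty} \big( x^{\top}Qx + u^{\top}Ru - \gamma^{2}d^{\top}d \big)\, \mathrm{d}\tau,

whose optimality condition is the nonlinear Hamilton-Jacobi equation referred to in the abstract. An actor-critic scheme approximates its solution online with, for example, a critic \hat{V}(x)=\hat{W}_{c}^{\top}\phi(x) and an actor \hat{u}(x)=-\tfrac{1}{2}R^{-1}g^{\top}(x)\nabla\phi^{\top}(x)\hat{W}_{a}, where \rho(t) is the performance envelope, \gamma the disturbance attenuation level, and \hat{W}_{c}, \hat{W}_{a} the critic and actor weight estimates; these symbols are placeholders, not the paper's notation.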
Pages: 16