Adaptive Optimal Surrounding Control of Multiple Unmanned Surface Vessels via Actor-Critic Reinforcement Learning

Cited by: 1
Authors
Lu, Renzhi [1 ,2 ,3 ,4 ]
Wang, Xiaotao [5 ]
Ding, Yiyu [5 ]
Zhang, Hai-Tao [6 ,7 ]
Zhao, Feng [8 ]
Zhu, Lijun [9 ]
He, Yong [10 ,11 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Key Lab Image Proc & Intelligent Control, Wuhan 430074, Peoples R China
[2] Minist Educ, Key Lab Ind Internet Things & Networked Control, Chongqing 400065, Peoples R China
[3] Chongqing Univ, State Key Laboratory of Mech Transmiss Adv Equipme, Chongqing 400044, Peoples R China
[4] Hubei Key Lab Adv Control & Intelligent Automat C, Wuhan 430074, Peoples R China
[5] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan 430074, Peoples R China
[6] Huazhong Univ Sci & Technol, Inst Artificial Intelligence, MOE Engn Res Ctr Autonomous Intelligent Unmanned, Sch Artificial Intelligence & Automat, State Key L, Wuhan 430074, Peoples R China
[7] Guangdong HUST Ind Technol Res Inst, Guangdong Prov Engn Technol Res Ctr Autonomous Un, Dongguan 523808, Peoples R China
[8] China Ship Sci Res Ctr, Wuxi 214082, Peoples R China
[9] Huazhong Univ Sci & Technol, MOE Engn Res Ctr Autonomous Intelligent Unmanned, Sch Artificial Intelligence & Automat, Wuhan 430074, Peoples R China
[10] China Univ Geosci, Sch Automat, Hubei Key Lab Adv Control & Intelligent Automat, Wuhan 430074, Peoples R China
[11] China Univ Geosci, Minist Educ, Engn Res Ctr Intelligent Technol Geoexplorat, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Actor-critic networks; Lyapunov functions; reinforcement learning (RL); surrounding control; unmanned surface vessels (USVs); MULTIAGENT SYSTEMS; AVOIDANCE;
DOI
10.1109/TNNLS.2024.3474289
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this article, an optimal surrounding control algorithm is proposed for multiple unmanned surface vessels (USVs), in which actor-critic reinforcement learning (RL) is utilized to optimize the surrounding process. Specifically, the multiple-USV optimal surrounding control problem is first transformed into a Hamilton-Jacobi-Bellman (HJB) equation, which is difficult to solve analytically due to its nonlinearity. An adaptive actor-critic RL control paradigm is then proposed to obtain the optimal surrounding strategy, wherein the Bellman residual error is used to construct the network update laws. In particular, a virtual controller representing intermediate transitions and an actual controller acting on the dynamics model are employed as the surrounding control solution for the second-order USVs, so that optimal surrounding control of the USVs is guaranteed. In addition, the stability of the proposed controller is analyzed by means of Lyapunov theory. Finally, numerical simulation results demonstrate that the proposed actor-critic RL-based surrounding controller achieves the surrounding objective while optimizing the evolution process, yielding 9.76% and 20.85% reductions in trajectory length and energy consumption, respectively, compared with an existing controller.
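To make the abstract's pipeline concrete, the sketch below states a generic continuous-time HJB formulation and a Bellman-residual-based critic update law of the kind commonly used in adaptive actor-critic control; the notation (dynamics f, g, cost weights Q, R, critic features phi and weights W_c, gain alpha_c) is an illustrative assumption and not necessarily the paper's own formulation.

% Generic sketch, assuming affine dynamics \dot{x} = f(x) + g(x)u and
% quadratic running cost r(x,u) = x^\top Q x + u^\top R u.
% The optimal value function V^* satisfies the HJB equation:
\[
0 = \min_{u}\Big[ r(x,u) + \nabla V^{*}(x)^{\top}\big(f(x) + g(x)u\big) \Big].
\]
% With a critic approximation \hat{V}(x) = \hat{W}_c^{\top}\varphi(x), the Bellman
% residual e_c and a normalized gradient-descent weight update take the form
\[
e_c = r(x,\hat{u}) + \hat{W}_c^{\top}\sigma, \qquad
\sigma = \nabla\varphi(x)\big(f(x) + g(x)\hat{u}\big), \qquad
\dot{\hat{W}}_c = -\alpha_c\,\frac{\sigma}{\big(1+\sigma^{\top}\sigma\big)^{2}}\,e_c .
\]

Driving e_c toward zero makes the critic approximately satisfy the HJB equation along the trajectory, and the actor weights are then adjusted toward the control that minimizes the approximated Hamiltonian; this is the general mechanism the abstract refers to when it says the Bellman residual error is used to construct the network update laws.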
Pages: 14
Related Papers
50 records in total
  • [1] Adaptive Optimal Tracking Control of an Underactuated Surface Vessel Using Actor-Critic Reinforcement Learning
    Chen, Lin
    Dai, Shi-Lu
    Dong, Chao
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (06) : 7520 - 7533
  • [2] Optimal Elevator Group Control via Deep Asynchronous Actor-Critic Learning
    Wei, Qinglai
    Wang, Lingxiao
    Liu, Yu
    Polycarpou, Marios M.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (12) : 5245 - 5256
  • [3] Fractional-Order Systems Optimal Control via Actor-Critic Reinforcement Learning and Its Validation for Chaotic MFET
    Li, Dongdong
    Dong, Jiuxiang
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, : 1173 - 1182
  • [4] Actor-critic reinforcement learning for bidding in bilateral negotiation
    Arslan, Furkan
    Aydogan, Reyhan
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2022, 30 (05) : 1695 - 1714
  • [5] Improving Actor-Critic Reinforcement Learning via Hamiltonian Monte Carlo Method
    Xu D.
    Fekri F.
IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2023, 4 (06): 1642 - 1653
  • [6] Adaptive Optimal Tracking Control for Uncertain Unmanned Surface Vessel via Reinforcement Learning
    Chen, Lin
    Wang, Min
    Dai, Shi-Lu
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 8398 - 8403
  • [7] Efficient Model Learning Methods for Actor-Critic Control
    Grondman, Ivo
    Vaandrager, Maarten
    Busoniu, Lucian
    Babuska, Robert
    Schuitema, Erik
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2012, 42 (03): 591 - 602
  • [8] A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients
    Grondman, Ivo
    Busoniu, Lucian
    Lopes, Gabriel A. D.
    Babuska, Robert
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS, 2012, 42 (06): 1291 - 1307
  • [9] Design of Observer-Based Control With Residual Generator Using Actor-Critic Reinforcement Learning
    Qian L.
    Zhao X.
    Liu P.
    Zhang Z.
    Lv Y.
IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2023, 4 (04): 734 - 743
  • [10] Enhancing 5G Network Slicing: Slice Isolation via Actor-Critic Reinforcement Learning with Optimal Graph Features
    Javadpour, Amir
    Ja'fari, Forough
    Taleb, Tarik
    Benzaid, Chafika
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 31 - 37