Optimized Backstepping Consensus Control Using Reinforcement Learning for a Class of Nonlinear Strict-Feedback-Dynamic Multi-Agent Systems

Cited by: 67
|
Authors
Wen, Guoxing [1 ,2 ]
Chen, C. L. Philip [3 ,4 ]
Affiliations
[1] Binzhou Univ, Coll Sci, Binzhou 256600, Peoples R China
[2] Qilu Univ Technol, Sch Math & Stat, Jinan 250353, Peoples R China
[3] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510641, Peoples R China
[4] Dalian Maritime Univ, Nav Coll, Dalian 116026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Optimal control; Backstepping; Artificial neural networks; Performance analysis; Nonlinear dynamical systems; Consensus control; Mathematical model; Critic-actor architecture; high-order multi-agent system (MAS); neural network (NN); optimal control; reinforcement learning (RL); ADAPTIVE OPTIMAL-CONTROL; CONTAINMENT CONTROL; TRACKING CONTROL;
DOI
10.1109/TNNLS.2021.3105548
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this article, an optimized leader-following consensus control scheme is proposed for nonlinear strict-feedback multi-agent systems, building on the optimized backstepping technique, in which the virtual and actual controls of backstepping are designed as the optimized solutions of the corresponding subsystems so that the entire backstepping control is optimized. Because this control must not only optimize system performance but also synchronize the state variables of multiple agents, it is an interesting and challenging problem. To achieve the optimized control, neural network approximation-based reinforcement learning (RL) is performed under a critic-actor architecture. In most existing RL-based optimal controls, both the critic and actor updating laws are derived from the negative gradient of the square of the Hamilton-Jacobi-Bellman (HJB) equation's approximation, which contains multiple nonlinear terms, so the resulting algorithms are inevitably intricate. The proposed optimized control instead derives the RL updating laws from the negative gradient of a simple positive function that is correlated with the HJB equation; hence, the algorithm is significantly simpler. It also relaxes two common requirements imposed by most RL-based optimal controls: known dynamics and persistent excitation. The proposed optimized scheme is therefore a natural choice for high-order nonlinear multi-agent control. Finally, its effectiveness is demonstrated by both theoretical analysis and simulation.
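To make the critic-actor idea in the abstract concrete, the sketch below solves a toy scalar optimal-control problem by gradient descent. It is an illustrative assumption-laden stand-in, not the paper's actual updating laws: the system (x_dot = a*x + u with cost integrand x^2 + u^2), the critic's squared-HJB-residual descent, and the actor's simple quadratic surrogate 0.5*(w_a - w_c)^2 are all chosen here for demonstration; the article's "simple positive function correlated with the HJB equation" differs and should be consulted directly.

```python
import numpy as np

# Toy scalar linear-quadratic problem: x_dot = a*x + u, cost integrand x^2 + u^2.
# The value function is V(x) = p*x^2, where p solves the scalar Riccati equation
# p^2 - 2*a*p - 1 = 0, and the optimal control is u = -p*x.
a = -1.0                 # open-loop drift (assumed for illustration)
w_c, w_a = 0.0, 0.0      # critic and actor weight estimates of p
lr_c, lr_a = 0.01, 0.05  # learning rates

for _ in range(5000):
    # HJB residual per unit x^2 when V = w_c*x^2 and u = -w_c*x:
    #   delta = 1 + 2*a*w_c - w_c^2
    delta = 1.0 + 2.0 * a * w_c - w_c**2
    # Critic: negative gradient of 0.5*delta^2 with respect to w_c
    w_c -= lr_c * delta * (2.0 * a - 2.0 * w_c)
    # Actor: negative gradient of the simple positive function 0.5*(w_a - w_c)^2
    w_a -= lr_a * (w_a - w_c)

# Positive Riccati root: p* = a + sqrt(a^2 + 1) = sqrt(2) - 1 for a = -1
p_star = a + np.sqrt(a * a + 1.0)
print(w_c, w_a, p_star)
```

Both weights converge to the positive Riccati root, so the learned control u = -w_a*x matches the optimal u = -p*x; the actor update here descends a quadratic in the weight error rather than the full nonlinear HJB residual, which is the simplification flavor the abstract highlights.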
Pages: 1524 - 1536 (13 pages)
Related Papers
50 records in total
  • [1] Simplified Optimized Backstepping Control for a Class of Nonlinear Strict-Feedback Systems With Unknown Dynamic Functions
    Wen, Guoxing
    Chen, C. L. Philip
    Ge, Shuzhi Sam
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (09) : 4567 - 4580
  • [2] Optimized Backstepping Tracking Control Using Reinforcement Learning for a Class of Stochastic Nonlinear Strict-Feedback Systems
    Wen, Guoxing
    Xu, Liguang
    Li, Bin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (03) : 1291 - 1303
  • [3] Game-Based Backstepping Design for Strict-Feedback Nonlinear Multi-Agent Systems Based on Reinforcement Learning
    Long, Jia
    Yu, Dengxiu
    Wen, Guoxing
    Li, Li
    Wang, Zhen
    Chen, C. L. Philip
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (01) : 817 - 830
  • [4] Optimized Backstepping Control Using Reinforcement Learning of Observer-Critic-Actor Architecture Based on Fuzzy System for a Class of Nonlinear Strict-Feedback Systems
    Wen, Guoxing
    Li, Bin
    Niu, Ben
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2022, 30 (10) : 4322 - 4335
  • [5] Optimized backstepping consensus control using adaptive observer-critic-actor reinforcement learning for strict-feedback multi-agent systems
    Zhu, Jiahao
    Wen, Guoxing
    Veluvolu, Kalyana C.
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2024, 361 (06):
  • [6] Reinforcement learning-based optimized backstepping control of nonlinear strict feedback system with unknown control gain function
    Zhou, Ranran
    Wen, Guoxing
    Li, Bin
    OPTIMAL CONTROL APPLICATIONS & METHODS, 2022, 43 (05) : 1358 - 1378
  • [7] Optimized Leader-Follower Consensus Control of Multi-QUAV Attitude System Using Reinforcement Learning and Backstepping
    Wen, Guoxing
    Song, Yanfen
    Li, Zijun
    Li, Bin
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2025, 9 (02): 1469 - 1479
  • [8] Event-Triggered Consensus Control of Nonlinear Strict Feedback Multi-Agent Systems
    Zhuang, Jiaojiao
    Li, Zhenxing
    Hou, Zongxiang
    Yang, Chengdong
    MATHEMATICS, 2022, 10 (09)
  • [9] Adaptive consensus tracking control of strict-feedback nonlinear multi-agent systems with unknown dynamic leader
    Cui, Yang
    Liu, Xiaoping
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (08) : 6215 - 6226
  • [10] Adaptive Tracking Control for Perturbed Strict-Feedback Nonlinear Systems Based on Optimized Backstepping Technique
    Liu, Yongchao
    Zhu, Qidan
    Wen, Guoxing
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (02) : 853 - 865