Consensus of Nonlinear Multiagent Systems With Uncertainties Using Reinforcement Learning Based Sliding Mode Control

Cited by: 39
Authors
Li, Jinna [1 ]
Yuan, Lin [1 ]
Chai, Tianyou [2 ]
Lewis, Frank L. [3 ]
Affiliations
[1] Liaoning Petrochem Univ, Sch Informat & Control Engn, Fushun 113001, Peoples R China
[2] Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110819, Peoples R China
[3] Univ Texas Arlington, UTA Res Inst, Arlington, TX 76118 USA
Funding
National Natural Science Foundation of China;
Keywords
Uncertainty; Delays; Protocols; Sliding mode control; Multi-agent systems; Robustness; Reinforcement learning; distributed consensus control; CONTINUOUS-TIME SYSTEMS; TRACKING CONTROL; GRAPHICAL GAMES; DESIGN; DISTURBANCES;
DOI
10.1109/TCSI.2022.3206102
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
This paper investigates the design of distributed control protocols for uncertain nonlinear multi-agent systems with the goal of achieving optimal consensus. The critical challenges in designing optimal distributed control protocols stem mainly from the internal coupling among agents, uncertainty, and nonlinear dynamics; communication delay among agents makes these challenges even harder to overcome. To this end, a novel control design method is developed that combines the sliding mode control principle with the reinforcement learning technique. The notable contributions of the developed method are the design of distributed sliding mode controllers and an integrated framework of sliding mode control and reinforcement learning, which together enable the composite distributed control protocols for multi-agent systems to be learned successfully. As a result, all agents can eliminate the adverse effects of system uncertainties and inter-agent communication delay, and ultimately follow the leader in a nearly optimal manner. The reachability of the sliding mode surfaces and the optimality of the consensus are rigorously proven and analyzed. Finally, simulation results illustrate the effectiveness of the developed method.
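As a rough illustration of the sliding-mode half of the abstract's approach, the following minimal Python sketch simulates leader-following consensus of double-integrator followers under a distributed sliding-mode law. It is not the paper's exact protocol (which additionally learns an optimal composite protocol via reinforcement learning and handles communication delay); the communication graph, the gains `c` and `k`, and the disturbance are invented for the example.

```python
import numpy as np

# Hypothetical sketch: leader-following consensus of double-integrator
# followers under a distributed sliding-mode law. All numbers below are
# illustrative assumptions, not values from the paper.

N = 3
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])          # follower-follower adjacency
b = np.array([1., 0., 0.])            # pinning gains to the leader
L = np.diag(A.sum(1)) - A             # graph Laplacian of the followers
c, k, dt, T = 2.0, 3.0, 1e-3, 8.0     # surface slope, switching gain, step, horizon

x = np.array([1.0, -2.0, 0.5])        # follower positions
v = np.zeros(N)                       # follower velocities
x0, v0 = 0.0, 0.0                     # static leader

for step in range(int(T / dt)):
    t = step * dt
    # local neighborhood errors (each agent uses only neighbor/leader data)
    e = L @ x + b * (x - x0)
    ev = L @ v + b * (v - v0)
    s = ev + c * e                    # distributed sliding variable
    # tanh() as a boundary-layer approximation of sign() to limit chattering
    u = -c * ev - k * np.tanh(s / 0.05)
    d = 0.2 * np.sin(np.arange(N) + t)   # bounded matched disturbance
    v += (u + d) * dt                 # forward-Euler integration
    x += v * dt

# once s is driven near zero, e evolves as e' = -c e, so x -> x0 despite d
print(np.max(np.abs(x - x0)))
```

The switching gain `k` must dominate the disturbance bound (here 3.0 vs. 0.2) for the sliding surface to be reachable; this is the robustness property the abstract attributes to the sliding-mode part of the design.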
Pages: 424-434
Page count: 11
Cited References
36 in total (entries [11]-[20] shown below)
[11] Li, Jinna; Ding, Jinliang; Chai, Tianyou; Lewis, Frank L.; Jagannathan, Sarangapani. Adaptive Interleaved Reinforcement Learning: Robust Stability of Affine Nonlinear Systems With Unknown Uncertainty. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(1): 270-280.
[12] Li, Yongming; Zhang, Jiaxin; Liu, Wei; Tong, Shaocheng. Observer-Based Adaptive Optimized Control for Stochastic Nonlinear Systems With Input and State Constraints. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 7791-7805.
[13] Li, Yongming; Liu, Yanjun; Tong, Shaocheng. Observer-Based Neuro-Adaptive Optimized Control of Strict-Feedback Nonlinear Systems With State Constraints. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(7): 3131-3145.
[14] Liu, Y.; Jia, Y. Adaptive leader-following consensus control of multi-agent systems using model reference adaptive control approach. IET Control Theory and Applications, 2012, 6(13): 2002-2008.
[15] Liu, Yifan; Li, Tieshan; Shan, Qihe; Yu, Renhai; Wu, Yue; Chen, C. L. Philip. Online optimal consensus control of unknown linear multi-agent systems via time-based adaptive dynamic programming. Neurocomputing, 2020, 404: 137-144.
[16] Modares, Hamidreza; Lewis, Frank L.; Jiang, Zhong-Ping. Optimal Output-Feedback Control of Unknown Continuous-Time Linear Systems Using Off-policy Reinforcement Learning. IEEE Transactions on Cybernetics, 2016, 46(11): 2401-2410.
[17] Modares, Hamidreza; Lewis, Frank L.; Jiang, Zhong-Ping. H∞ Tracking Control of Completely Unknown Continuous-Time Systems via Off-Policy Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(10): 2550-2562.
[18] Modares, Hamidreza; Lewis, Frank L. Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems Using Reinforcement Learning. IEEE Transactions on Automatic Control, 2014, 59(11): 3051-3056.
[19] Qin, Jiahu; Li, Man; Shi, Yang; Ma, Qichao; Zheng, Wei Xing. Optimal Synchronization Control of Multiagent Systems With Input Saturation via Off-Policy Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(1): 85-96.
[20] Ren, Chang-E; Chen, C. L. Philip. Sliding mode leader-following consensus controllers for second-order non-linear multi-agent systems. IET Control Theory and Applications, 2015, 9(10): 1544-1552.