Optimized Leader-Follower Consensus Control Using Reinforcement Learning for a Class of Second-Order Nonlinear Multiagent Systems

Cited by: 46
Authors:
Wen, Guoxing [1]
Li, Bin [2]
Affiliations:
[1] Binzhou Univ, Coll Sci, Binzhou 256600, Shandong, Peoples R China
[2] Qilu Univ Technol, Sch Math & Stat, Shandong Acad Sci, Jinan 250353, Shandong, Peoples R China
Source:
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS | 2022, Vol. 52, No. 09
Funding:
National Natural Science Foundation of China;
Keywords:
Optimal control; Multi-agent systems; Artificial neural networks; Heuristic algorithms; Reinforcement learning; Consensus control; Topology; Double integrator dynamic; multiagent system; neural network (NN); optimal control; reinforcement learning (RL); unknown nonlinear dynamic; HJB EQUATION; NETWORKS;
DOI
10.1109/TSMC.2021.3130070
Chinese Library Classification:
TP [Automation Technology, Computer Technology];
Subject Classification Code:
0812;
Abstract:
In this article, an optimized leader-follower consensus control is proposed for a class of second-order multiagent systems with unknown nonlinear dynamics. Unlike the first-order multiagent consensus problem, the second-order case must reach agreement not only on position but also on velocity, so this optimized control is more challenging and interesting. Reinforcement learning (RL) is a natural choice for deriving the control because it overcomes the difficulty of solving the Hamilton-Jacobi-Bellman (HJB) equation. Implementing RL requires the adaptive critic and actor networks to be iterated against each other. However, if this optimized control followed most existing optimal methods, which derive the critic and actor adaptive laws from the negative gradient of the square of the HJB approximation error, the resulting algorithm would be very intricate, because the HJB equation associated with a second-order nonlinear multiagent system becomes very complex owing to strong state coupling and nonlinearity. In this work, the two RL adaptive laws are instead derived by applying the gradient descent method to a simple positive function obtained from a partial derivative of the HJB equation, so this optimized control is significantly simpler. Meanwhile, it not only avoids the need for knowledge of the dynamics but also relaxes the persistent excitation condition, which most RL optimization methods demand in order to train the adaptive parameters sufficiently. Finally, the proposed control is demonstrated by both theoretical analysis and computer simulation.
Pages: 5546 - 5555
Page count: 10
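
The abstract describes an actor-critic RL design for second-order (double-integrator) leader-follower consensus. As a minimal, hedged sketch of the generic optimal-consensus formulation that such designs typically build on, the LaTeX fragment below writes out the local consensus errors, a standard local cost, the HJB-type optimality condition, and the usual critic/actor neural-network approximators. All symbols here (adjacency weights a_ij, pinning gain b_i, drift f_i, input matrix g_i, weighting matrices Q_i and R_i, basis phi_i, and the critic/actor weights) are illustrative assumptions, not the paper's exact notation or derivation.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Agent i double-integrator dynamics with unknown drift f_i and input matrix g_i:
%   \dot{x}_i = v_i,  \dot{v}_i = f_i(x_i, v_i) + g_i u_i
Local neighborhood consensus errors on position and velocity:
\begin{align}
  e_i &= \sum_{j \in \mathcal{N}_i} a_{ij}\,(x_i - x_j) + b_i\,(x_i - x_0), &
  \delta_i &= \sum_{j \in \mathcal{N}_i} a_{ij}\,(v_i - v_j) + b_i\,(v_i - v_0).
\end{align}
A standard local cost, HJB-type optimality condition, and optimal control:
\begin{align}
  J_i &= \int_t^{\infty} \bigl( z_i^{\mathsf T} Q_i z_i + u_i^{\mathsf T} R_i u_i \bigr)\, d\tau,
  \qquad z_i = \bigl[\, e_i^{\mathsf T}, \ \delta_i^{\mathsf T} \,\bigr]^{\mathsf T}, \\
  0 &= \min_{u_i} \Bigl[ z_i^{\mathsf T} Q_i z_i + u_i^{\mathsf T} R_i u_i
        + (\nabla_{z_i} V_i)^{\mathsf T} \dot{z}_i \Bigr],
  \qquad u_i^{*} = -\tfrac{1}{2} R_i^{-1} g_i^{\mathsf T} \nabla_{z_i} V_i .
\end{align}
Critic and actor neural-network approximations used in RL schemes of this type:
\begin{align}
  \hat V_i = \hat W_{c,i}^{\mathsf T} \phi_i(z_i),
  \qquad
  \hat u_i = -\tfrac{1}{2} R_i^{-1} g_i^{\mathsf T} \bigl(\nabla \phi_i\bigr)^{\mathsf T} \hat W_{a,i}.
\end{align}
\end{document}

Per the abstract, the paper's departure from this standard scheme is that the critic and actor adaptive laws are obtained by gradient descent on a simple positive function built from a partial derivative of the HJB equation, rather than from the squared HJB residual, which also removes the persistent excitation requirement.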
Related Papers
50 records in total
  • [21] Leader-follower consensus control of Lipschitz nonlinear systems by output feedback
    Isira, Ahmad Sadhiqin Mohd
    Zuo, Zongyu
    Ding, Zhengtao
    INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE, 2016, 47 (16) : 3772 - 3781
  • [22] Finite-time leader-follower consensus control of multiagent systems with mismatched disturbances
    Gu, Lixue
    Zhao, Zhanshan
    Sun, Jie
    Wang, Zhangang
    ASIAN JOURNAL OF CONTROL, 2022, 24 (02) : 722 - 731
  • [23] Data-Driven Optimal Synchronization Control for Leader-Follower Multiagent Systems
    Zhou, Yuanqiang
    Li, Dewei
    Gao, Furong
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2023, 53 (01) : 495 - 503
  • [24] Leader-Follower Affine Formation Control of Second-Order Nonlinear Uncertain Multi-Agent Systems
    Zhi, Hui
    Chen, Liangming
    Li, Chuanjiang
    Guo, Yanning
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2021, 68 (12) : 3547 - 3551
  • [25] Consensus control of second-order leader-follower multi-agent systems with event-triggered strategy
    Xie, Duosi
    Yuan, Deming
    Lu, Junwei
    Zhang, Yijun
    TRANSACTIONS OF THE INSTITUTE OF MEASUREMENT AND CONTROL, 2013, 35 (04) : 426 - 436
  • [26] Optimal Leader-Follower Consensus for Constrained-Input Multiagent Systems With Completely Unknown Dynamics
    Shi, Jing
    Yue, Dong
    Xie, Xiangpeng
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2022, 52 (02) : 1182 - 1191
  • [27] Fully Distributed Optimal Consensus for a Class of Second-Order Nonlinear Multiagent Systems With Switching Topologies
    Zou, Wencheng
    Zhou, Jiantao
    Yang, Yongliang
    Xiang, Zhengrong
    IEEE SYSTEMS JOURNAL, 2023, 17 (01) : 1548 - 1558
  • [28] Leader-follower consensus for a class of nonlinear multi-agent systems
    Wang, Xing-Hu
    Ji, Hai-Bo
    International Journal of Control, Automation and Systems, 2012, 10 : 27 - 35
  • [29] Leader-Follower Output Synchronization of Linear Heterogeneous Systems With Active Leader Using Reinforcement Learning
    Yang, Yongliang
    Modares, Hamidreza
    Wunsch, Donald C., II
    Yin, Yixin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (06) : 2139 - 2153
  • [30] Pinning Control of Lag-Consensus for Second-Order Nonlinear Multiagent Systems
    Wang, Yi
    Ma, Zhongjun
    Zheng, Song
    Chen, Guanrong
    IEEE TRANSACTIONS ON CYBERNETICS, 2017, 47 (08) : 2203 - 2211