Optimal Synchronization Control of Multiagent Systems With Input Saturation via Off-Policy Reinforcement Learning

Cited: 101
Authors
Qin, Jiahu [1]
Li, Man [1]
Shi, Yang [2]
Ma, Qichao [1]
Zheng, Wei Xing [3]
Affiliations
[1] Univ Sci & Technol China, Dept Automat, Hefei 230027, Anhui, Peoples R China
[2] Univ Victoria, Dept Mech Engn, Victoria, BC V8W 2Y2, Canada
[3] Western Sydney Univ, Sch Comp Engn & Math, Sydney, NSW 2751, Australia
Funding
National Natural Science Foundation of China; Australian Research Council
Keywords
Input saturation; multiagent systems; neural networks (NNs); off-policy reinforcement learning (RL); optimal synchronization control; LINEAR-SYSTEMS; NONLINEAR-SYSTEMS; NETWORKS; GAMES;
DOI
10.1109/TNNLS.2018.2832025
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we investigate the optimal synchronization problem for a group of generic linear systems with input saturation. To find the optimal controllers, coupled Hamilton–Jacobi–Bellman (HJB) equations involving nonquadratic input energy terms are established. The solutions to these coupled HJB equations are further proven to be optimal, and the induced controllers constitute an interactive Nash equilibrium. Because HJB equations, especially in coupled form, are difficult to solve analytically, and model information of the systems may be unavailable, we apply a data-based off-policy reinforcement learning algorithm to learn the optimal control policies. As a byproduct, this off-policy algorithm is shown to be insensitive to the probing noise exerted on the system to maintain the persistence of excitation condition. To implement the off-policy algorithm, we employ actor and critic neural networks to approximate the controllers and the cost functions. Furthermore, the estimated control policies obtained by the presented implementation are proven to converge to the optimal ones under certain conditions. Finally, an illustrative example is provided to verify the effectiveness of the proposed algorithm.
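For context, here is a worked sketch of the nonquadratic input-energy term commonly used in constrained-input HJB formulations (in the style of Abu-Khalaf and Lewis); the symbols lambda, R, g, and V* below are illustrative conventions from that literature, not notation taken from this paper:

```latex
% Nonquadratic input-energy term for an input bounded by |u| <= lambda:
W(u) = 2 \int_{0}^{u} \lambda \tanh^{-1}\!\bigl( v/\lambda \bigr)^{\top} R \,\mathrm{d}v
% The associated HJB-optimal control is saturated by construction:
u^{*}(x) = -\lambda \tanh\!\Bigl( \tfrac{1}{2\lambda}\, R^{-1} g^{\top}(x)\, \nabla V^{*}(x) \Bigr)
```

With this energy term the optimal control never violates the bound lambda, which is why HJB equations for saturated inputs carry nonquadratic rather than quadratic input costs.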
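The following is a minimal, hypothetical single-agent sketch of the ingredients the abstract names: a tanh-saturated policy, the nonquadratic input-energy cost, and least-squares critic fitting from off-policy data excited by probing noise. It is not the paper's coupled multiagent algorithm; the dynamics, gains, basis, and dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0                                 # assumed saturation bound |u| <= lam
A = np.array([[0.0, 1.0], [-1.0, -0.5]])  # illustrative open-loop dynamics
B = np.array([[0.0], [1.0]])
Q, R, dt, gamma = np.eye(2), 1.0, 0.01, 0.99

def saturated_policy(x, K):
    # tanh wrapping keeps the applied input inside [-lam, lam] by construction
    return lam * np.tanh(K @ x / lam)

def input_energy(u):
    # nonquadratic energy W(u) = 2*lam*R * int_0^u arctanh(v/lam) dv, scalar u;
    # closed form of the integral: u*arctanh(u/lam) + (lam/2)*log(1 - (u/lam)^2)
    r = np.clip(u / lam, -0.999, 0.999)   # guard the arctanh singularity
    return 2.0 * lam * R * (u * np.arctanh(r) + (lam / 2.0) * np.log(1.0 - r ** 2))

def phi(x):
    # quadratic critic basis: V(x) ~= w . phi(x), linear in the weights w
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

# A fixed behavior policy plus probing noise generates the off-policy data set.
K_b = np.array([[-1.0, -1.0]])
X, Xn, C = [], [], []
x = rng.standard_normal(2)
for _ in range(2000):
    u = saturated_policy(x, K_b) + 0.1 * rng.standard_normal(1)  # probing noise
    u = np.clip(u, -0.99 * lam, 0.99 * lam)
    c = (x @ Q @ x + input_energy(float(u[0]))) * dt             # running cost
    xn = x + (A @ x + B @ u) * dt                                # Euler step
    X.append(x); Xn.append(xn); C.append(c)
    x = xn if np.linalg.norm(xn) < 10.0 else rng.standard_normal(2)

# Least-squares fit of the Bellman residual for the critic weights:
# w . (phi(x) - gamma * phi(x')) ~= c  for every recorded transition.
Phi = np.array([phi(a) - gamma * phi(b) for a, b in zip(X, Xn)])
w, *_ = np.linalg.lstsq(Phi, np.array(C), rcond=None)
print("fitted critic weights:", w)
```

Because the critic is linear in its weights, policy evaluation reduces to one least-squares solve over the logged transitions, which is what makes such schemes usable with a fixed off-policy data set rather than on-policy rollouts.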
Pages: 85-96
Number of pages: 12