Data-Efficient Off-Policy Learning for Distributed Optimal Tracking Control of HMAS With Unidentified Exosystem Dynamics

Cited by: 22
Authors
Xu, Yong [1 ]
Wu, Zheng-Guang [2 ]
Affiliations
[1] Beijing Institute of Technology, School of Automation, Beijing 100081, People's Republic of China
[2] Zhejiang University, Institute of Cyber-Systems and Control, Hangzhou 310027, People's Republic of China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Heuristic algorithms; Observers; Mathematical models; Approximation algorithms; Multi-agent systems; Symmetric matrices; Regulation; Adaptive observer; approximate dynamic programming (ADP); heterogeneous multiagent systems (HMASs); output tracking; reinforcement learning (RL); COOPERATIVE OUTPUT REGULATION; LINEAR MULTIAGENT SYSTEMS; ADAPTIVE OPTIMAL-CONTROL; CONTINUOUS-TIME SYSTEMS; SYNCHRONIZATION; CONSENSUS; OBSERVER; FEEDBACK;
DOI
10.1109/TNNLS.2022.3172130
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this article, a data-efficient off-policy reinforcement learning (RL) approach is proposed for distributed output tracking control of heterogeneous multiagent systems (HMASs) using approximate dynamic programming (ADP). Unlike existing results, in which the kinematic model of the exosystem is accessible to some or all agents, the dynamics of the exosystem are assumed here to be completely unknown to all agents. To overcome this difficulty, an identification algorithm based on the experience-replay method is designed for each agent to identify the system matrices of a novel reference model rather than those of the original exosystem. Then, an output-based distributed adaptive output observer is proposed to provide estimates of the leader; the observer not only has low dimension and requires less data transmission among agents but is also implemented in a fully distributed way. In addition, a data-efficient RL algorithm is given to design the optimal controller along the system trajectories without solving the output regulator equations. An ADP approach is developed to iteratively solve the game algebraic Riccati equations (GAREs) online, using measured state and input information, which removes the need for prior offline knowledge of the agents' system matrices. Finally, a numerical example is provided to verify the effectiveness of the theoretical analysis.
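To illustrate the iterative Riccati-equation-solving structure that the abstract refers to, the following is a minimal sketch of model-based policy iteration (Kleinman's algorithm) for a single-agent LQR algebraic Riccati equation. It is not the paper's data-driven off-policy GARE algorithm: the matrices A, B, Q, R and the zero initial gain are hypothetical, and the paper's method instead performs the policy-evaluation step from measured state and input data without knowing the agents' system matrices.

```python
# Minimal illustrative sketch (assumed example data): Kleinman policy iteration
# for a continuous-time LQR algebraic Riccati equation. Shown only to convey
# the "policy evaluation / policy improvement" loop underlying iterative
# ARE/GARE solvers; not the paper's data-driven off-policy ADP algorithm.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical Hurwitz plant matrix
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))  # stabilizing initial policy (valid because A is Hurwitz)
for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve Ak^T P + P Ak + Q + K^T R K = 0
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        K = K_new
        break
    K = K_new

# The iteration converges to the stabilizing solution of the ARE.
P_are = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_are, atol=1e-8))
```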
Pages: 3181-3190
Number of pages: 10