Formation Tracking of Spatiotemporal Multiagent Systems: A Decentralized Reinforcement Learning Approach

Cited by: 1
Authors
Liu, Tianrun [1 ]
Chen, Yang-Yang [1 ]
Affiliation
[1] Southeast Univ, Sch Automat, Nanjing 210096, Peoples R China
Source
IEEE SYSTEMS MAN AND CYBERNETICS MAGAZINE, 2024, Vol. 10, No. 4
Funding
National Natural Science Foundation of China
Keywords
Training; Reinforcement learning; Artificial neural networks; Observers; Orbits; Spatiotemporal phenomena; Safety; Numerical models; Optimization; Multi-agent systems;
DOI
10.1109/MSMC.2024.3401404
Chinese Library Classification number
TP3 [Computing Technology, Computer Technology]
Discipline code
0812
Abstract
This article investigates the formation tracking problem for discrete-time uncertain spatiotemporal multiagent systems (MASs). The common multiagent reinforcement learning (MARL) approach requires the actions and states of all agents to train a centralized critic, which may be impractical under constrained communication. Therefore, a decentralized RL framework is proposed that combines a neural network boundary approximation distributed observer (NNBADO) and an intelligent nonaffine leader (INL). As a result, the formation tracking problem for each agent can be modeled as a partially observable Markov decision process (POMDP). A novel RL formation tracking algorithm is designed based on a fusion reward scheme that synthesizes the orbit tracking and formation objectives. Experimental results show that the proposed algorithm improves formation accuracy.
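The fusion reward scheme mentioned in the abstract combines an orbit-tracking objective with a formation objective. A minimal sketch of how such a fused per-agent reward might be composed is shown below; the weighted-sum form, the function name `fusion_reward`, and the weights are illustrative assumptions, not the paper's actual reward design.

```python
import numpy as np

def fusion_reward(pos, orbit_point, formation_point, w_orbit=0.5, w_form=0.5):
    """Illustrative fused reward for one agent (weights are hypothetical).

    pos:             current agent position
    orbit_point:     desired point on the reference orbit
    formation_point: desired position in the formation pattern
    Returns the negative weighted sum of the two tracking errors, so the
    reward is 0 when both objectives are met and decreases with error.
    """
    orbit_err = np.linalg.norm(pos - orbit_point)
    form_err = np.linalg.norm(pos - formation_point)
    return -(w_orbit * orbit_err + w_form * form_err)
```

Under this sketch, an RL agent maximizing the fused reward is pushed toward satisfying both objectives simultaneously, with the weights trading off orbit tracking against formation keeping.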
Pages: 52-60
Page count: 9