Distributed Offline Reinforcement Learning

Cited by: 0
Authors
Heredia, Paulo [1 ]
George, Jemin [2 ]
Mou, Shaoshuai [1 ]
Affiliations
[1] Purdue Univ, Sch Aeronaut & Astronaut, W Lafayette, IN 47907 USA
[2] US Army Res Lab, Adelphi, MD 20783 USA
Source
2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC) | 2022
DOI
10.1109/CDC51059.2022.9992346
Chinese Library Classification
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
In this work, we explore the problem of offline reinforcement learning for a multi-agent system. Offline reinforcement learning differs from classical online and off-policy reinforcement learning settings in that agents must learn from a fixed and finite dataset. We consider a scenario where there exists a large dataset produced by interactions between an agent and its environment. We suppose the dataset is too large to be efficiently processed by an agent with limited resources, and so we consider a multi-agent network that cooperatively learns a control policy. We present a distributed reinforcement learning algorithm based on Q-learning and an approach called offline regularization. The main result of this work shows that the proposed algorithm converges in the sense that the squared-norm error is asymptotically bounded by a constant determined by the number of samples in the dataset. In simulation, we implemented the proposed algorithm to train agents to control both a linear system and a nonlinear system, namely the well-known cartpole system. We provide simulation results showing the performance of the trained agents.
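The abstract does not spell out the algorithm's details, but its ingredients (Q-learning on a fixed dataset split across agents, an offline regularization term, and cooperative learning over a network) can be illustrated with a hypothetical sketch. Everything below is an assumption for illustration only: the tabular Q representation, the L2 shrinkage used as the "offline regularization", the complete communication graph in the consensus step, and the function name `distributed_offline_q` are not taken from the paper.

```python
import numpy as np

def distributed_offline_q(shards, n_states, n_actions,
                          gamma=0.99, alpha=0.1, reg=0.01,
                          rounds=200):
    """Hypothetical sketch: each agent holds one shard of a fixed
    offline dataset of (s, a, r, s') transitions, runs a local
    regularized Q-learning sweep, then consensus-averages Q-tables."""
    n_agents = len(shards)
    Q = [np.zeros((n_states, n_actions)) for _ in range(n_agents)]
    for _ in range(rounds):
        # Local sweep over each agent's shard; the reg term shrinks
        # Q-values toward zero (stand-in for offline regularization).
        for i, shard in enumerate(shards):
            for (s, a, r, s2) in shard:
                td = r + gamma * Q[i][s2].max() - Q[i][s, a]
                Q[i][s, a] += alpha * (td - reg * Q[i][s, a])
        # Consensus step: average over all agents
        # (complete communication graph assumed for simplicity).
        mean_Q = sum(Q) / n_agents
        Q = [mean_Q.copy() for _ in range(n_agents)]
    return Q[0]
```

With a fixed, finite dataset, the Q-tables settle near a fixed point of the regularized update rather than the exact optimal Q-function, mirroring the abstract's claim of an asymptotic error bound that depends on the dataset.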
Pages: 4621 - 4626 (6 pages)
Related Papers
50 items total
  • [1] Byzantine-Robust Online and Offline Distributed Reinforcement Learning
    Chen, Yiding
    Zhang, Xuezhou
    Zhang, Kaiqing
    Wang, Mengdi
    Zhu, Xiaojin
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [2] Offline Reinforcement Learning with Pseudometric Learning
    Dadashi, Robert
    Rezaeifar, Shideh
    Vieillard, Nino
    Hussenot, Leonard
    Pietquin, Olivier
    Geist, Matthieu
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [3] Benchmarking Offline Reinforcement Learning
    Tittaferrante, Andrew
    Yassine, Abdulsalam
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 259 - 263
  • [4] Federated Offline Reinforcement Learning
    Zhou, Doudou
    Zhang, Yufeng
    Sonabend-W, Aaron
    Wang, Zhaoran
    Lu, Junwei
    Cai, Tianxi
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2024, 119 (548) : 3152 - 3163
  • [5] Learning Behavior of Offline Reinforcement Learning Agents
    Shukla, Indu
    Dozier, Haley R.
    Henslee, Althea C.
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS VI, 2024, 13051
  • [6] Offline Reinforcement Learning with Differential Privacy
    Qiao, Dan
    Wang, Yu-Xiang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [7] Bootstrapped Transformer for Offline Reinforcement Learning
    Wang, Kerong
    Zhao, Hanye
    Luo, Xufang
    Ren, Kan
    Zhang, Weinan
    Li, Dongsheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [8] Conservative Offline Distributional Reinforcement Learning
    Ma, Yecheng Jason
    Jayaraman, Dinesh
    Bastani, Osbert
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [9] On Efficient Sampling in Offline Reinforcement Learning
    Jia, Qing-Shan
    2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024, : 1 - 6
  • [10] Conservative network for offline reinforcement learning
    Peng, Zhiyong
    Liu, Yadong
    Chen, Haoqiang
    Zhou, Zongtan
    KNOWLEDGE-BASED SYSTEMS, 2023, 282