Distributed Offline Reinforcement Learning

Cited by: 0
Authors
Heredia, Paulo [1 ]
George, Jemin [2 ]
Mou, Shaoshuai [1 ]
Affiliations
[1] Purdue Univ, Sch Aeronaut & Astronaut, W Lafayette, IN 47907 USA
[2] US Army Res Lab, Adelphi, MD 20783 USA
Source
2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022
DOI
10.1109/CDC51059.2022.9992346
CLC number
TP [Automation & Computer Technology]
Subject classification
0812
Abstract
In this work, we explore the problem of offline reinforcement learning for a multi-agent system. Offline reinforcement learning differs from classical online and off-policy reinforcement learning settings in that agents must learn from a fixed and finite dataset. We consider a scenario where there exists a large dataset produced by interactions between an agent and its environment. We suppose the dataset is too large to be efficiently processed by a single agent with limited resources, and so we consider a multi-agent network that cooperatively learns a control policy. We present a distributed reinforcement learning algorithm based on Q-learning and an approach called offline regularization. The main result of this work shows that the proposed algorithm converges in the sense that the norm squared error is asymptotically bounded by a constant, which is determined by the number of samples in the dataset. In simulation, we have implemented the proposed algorithm to train agents to control both a linear system and a nonlinear system, namely the well-known cartpole system. We provide simulation results showing the performance of the trained agents.
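To make the setting concrete, the following is a minimal, hypothetical sketch of the kind of scheme the abstract describes: several agents each hold a shard of one fixed offline dataset, take local tabular Q-learning steps on their own shard, and then average their Q-tables over the network (a simple consensus step). The regularization term below is a generic shrinkage stand-in for the paper's "offline regularization", whose exact form is not given in this record; all names, sizes, and constants here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_agents = 5, 2, 3
gamma, alpha, reg = 0.9, 0.1, 0.01  # discount, step size, regularizer weight

# A fixed, finite offline dataset of (s, a, r, s') tuples, split into shards
# so that no single agent has to process the whole dataset.
dataset = [(int(rng.integers(n_states)), int(rng.integers(n_actions)),
            float(rng.random()), int(rng.integers(n_states)))
           for _ in range(300)]
shards = [dataset[i::n_agents] for i in range(n_agents)]

# Each agent keeps its own Q-table estimate.
Q = [np.zeros((n_states, n_actions)) for _ in range(n_agents)]

for _ in range(200):                       # training rounds
    for k in range(n_agents):
        # Local offline Q-learning step on a sample drawn from agent k's shard.
        s, a, r, s2 = shards[k][rng.integers(len(shards[k]))]
        td = r + gamma * Q[k][s2].max() - Q[k][s, a]
        # Shrinkage toward zero stands in for the offline regularization term.
        Q[k][s, a] += alpha * (td - reg * Q[k][s, a])
    # Consensus step: average the Q-tables across the network.
    mean_Q = sum(Q) / n_agents
    Q = [mean_Q.copy() for _ in range(n_agents)]
```

With averaging after every round the agents' estimates coincide exactly; in a truly distributed implementation the averaging would be replaced by gossip over a communication graph, and agreement would only hold asymptotically.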
Pages: 4621-4626
Page count: 6
Related papers
50 items total
  • [41] Offline Evaluation of Online Reinforcement Learning Algorithms
    Mandel, Travis
    Liu, Yun-En
    Brunskill, Emma
    Popovic, Zoran
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1926 - 1933
  • [42] Efficient Offline Reinforcement Learning With Relaxed Conservatism
    Huang, Longyang
    Dong, Botao
    Zhang, Weidong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (08) : 5260 - 5272
  • [43] Federated Offline Reinforcement Learning With Multimodal Data
    Wen, Jiabao
    Dai, Huiao
    He, Jingyi
    Xi, Meng
    Xiao, Shuai
    Yang, Jiachen
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01) : 4266 - 4276
  • [44] Is Pessimism Provably Efficient for Offline Reinforcement Learning?
    Jin, Ying
    Yang, Zhuoran
    Wang, Zhaoran
    MATHEMATICS OF OPERATIONS RESEARCH, 2024,
  • [45] Supported Policy Optimization for Offline Reinforcement Learning
    Wu, Jialong
    Wu, Haixu
    Qiu, Zihan
    Wang, Jianmin
    Long, Mingsheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [46] Improving Offline Reinforcement Learning with Inaccurate Simulators
    Hou, Yiwen
    Sun, Haoyuan
    Ma, Jinming
    Wu, Feng
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024, : 5162 - 5168
  • [47] Corruption-Robust Offline Reinforcement Learning
    Zhang, Xuezhou
    Chen, Yiding
    Zhu, Jerry
    Sun, Wen
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151 : 5757 - 5773
  • [48] Offline Quantum Reinforcement Learning in a Conservative Manner
    Cheng, Zhihao
    Zhang, Kaining
    Shen, Li
    Tao, Dacheng
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 7148 - 7156
  • [49] Advancing RAN Slicing with Offline Reinforcement Learning
    Yang, Kun
    Yeh, Shu-ping
    Zhang, Menglei
    Sydir, Jerry
    Yang, Jing
    Shen, Cong
    2024 IEEE INTERNATIONAL SYMPOSIUM ON DYNAMIC SPECTRUM ACCESS NETWORKS, DYSPAN 2024, 2024, : 331 - 338
  • [50] Percentile Criterion Optimization in Offline Reinforcement Learning
    Lobo, Elita A.
    Cousins, Cyrus
    Zick, Yair
    Petrik, Marek
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,