Distributed Offline Reinforcement Learning

Cited by: 0
Authors
Heredia, Paulo [1 ]
George, Jemin [2 ]
Mou, Shaoshuai [1 ]
Affiliations
[1] Purdue Univ, Sch Aeronaut & Astronaut, W Lafayette, IN 47907 USA
[2] US Army Res Lab, Adelphi, MD 20783 USA
Source
2022 IEEE 61st Conference on Decision and Control (CDC) | 2022
DOI
10.1109/CDC51059.2022.9992346
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
In this work, we explore the problem of offline reinforcement learning for a multi-agent system. Offline reinforcement learning differs from the classical online and off-policy settings in that agents must learn from a fixed, finite dataset. We consider a scenario in which a large dataset has been produced by interactions between an agent and its environment. We suppose the dataset is too large to be processed efficiently by a single agent with limited resources, and so we consider a multi-agent network that cooperatively learns a control policy. We present a distributed reinforcement learning algorithm based on Q-learning and an approach called offline regularization. The main result of this work shows that the proposed algorithm converges in the sense that the squared norm of the error is asymptotically bounded by a constant determined by the number of samples in the dataset. In simulation, we implemented the proposed algorithm to train agents to control both a linear system and a nonlinear system, namely the well-known cart-pole system. We provide simulation results showing the performance of the trained agents.
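
The abstract outlines the overall scheme: a fixed dataset is split across a network of agents, each agent runs regularized Q-learning updates on its own shard, and a consensus step keeps the agents' estimates aligned. The paper's exact update rule and its offline-regularization term are not reproduced in this record, so the sketch below is only a minimal illustration under stated assumptions: tabular Q-functions, a ring communication topology with a doubly stochastic mixing matrix, and a simple ridge-style shrinkage standing in for the offline regularizer. All names, constants, and the synthetic dataset are hypothetical.

```python
import numpy as np

# Hypothetical sketch of distributed offline Q-learning with consensus.
# Tabular Q-functions, a fixed dataset split into shards, ring topology.
# The ridge-style shrinkage below is a stand-in for the paper's
# "offline regularization" term, whose exact form is not given here.

rng = np.random.default_rng(0)
N_AGENTS, N_STATES, N_ACTIONS = 4, 10, 2
GAMMA, STEP, REG = 0.95, 0.1, 1e-3

# Fixed offline dataset: transitions (s, a, r, s') from some behavior policy.
def make_dataset(n):
    s = rng.integers(N_STATES, size=n)
    a = rng.integers(N_ACTIONS, size=n)
    r = rng.normal(size=n)
    s2 = rng.integers(N_STATES, size=n)
    return list(zip(s, a, r, s2))

dataset = make_dataset(2000)
# The dataset is too large for one agent, so each agent gets one shard.
shards = np.array_split(np.arange(len(dataset)), N_AGENTS)

# Doubly stochastic mixing matrix for a ring of agents.
W = np.zeros((N_AGENTS, N_AGENTS))
for i in range(N_AGENTS):
    W[i, i] = 0.5
    W[i, (i - 1) % N_AGENTS] = 0.25
    W[i, (i + 1) % N_AGENTS] = 0.25

Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))  # one Q-table per agent

for it in range(200):
    # Local step: Q-learning update on each agent's own shard, plus a
    # ridge-style regularizer (an assumption, not the paper's exact term).
    new_Q = Q.copy()
    for i in range(N_AGENTS):
        for idx in shards[i]:
            s, a, r, s2 = dataset[idx]
            td = r + GAMMA * Q[i, s2].max() - Q[i, s, a]
            new_Q[i, s, a] += STEP * (td - REG * Q[i, s, a])
    # Consensus step: each agent mixes its Q-table with its ring neighbors.
    Q = np.einsum('ij,jsa->isa', W, new_Q)

disagreement = np.linalg.norm(Q - Q.mean(axis=0))
print(f"consensus disagreement after training: {disagreement:.2e}")
```

The doubly stochastic mixing matrix W is the standard ingredient for average consensus over an undirected graph: repeated mixing drives the agents' Q-tables toward their network average, which mirrors the cooperative learning described in the abstract without claiming to match the paper's analysis.
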
Pages: 4621-4626
Page count: 6
Related Papers
50 items total
  • [11] Conservative Offline Distributional Reinforcement Learning
    Ma, Yecheng Jason
    Jayaraman, Dinesh
    Bastani, Osbert
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34: 19235-19247
  • [12] Offline reinforcement learning with task hierarchies
    Schwab, Devin
    Ray, Soumya
Machine Learning, 2017, 106(9-10): 1569-1598
  • [13] Offline reinforcement learning with representations for actions
    Lou, Xingzhou
    Yin, Qiyue
    Zhang, Junge
    Yu, Chao
    He, Zhaofeng
    Cheng, Nengjie
    Huang, Kaiqi
Information Sciences, 2022, 610: 746-758
  • [14] Dual Generator Offline Reinforcement Learning
    Vuong, Quan
    Kumar, Aviral
    Levine, Sergey
    Chebotar, Yevgen
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
  • [15] A Minimalist Approach to Offline Reinforcement Learning
    Fujimoto, Scott
    Gu, Shixiang Shane
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34
  • [16] An Optimistic Perspective on Offline Reinforcement Learning
    Agarwal, Rishabh
    Schuurmans, Dale
    Norouzi, Mohammad
International Conference on Machine Learning, Vol 119, 2020
  • [17] Offline Reinforcement Learning for Visual Navigation
    Shah, Dhruv
    Bhorkar, Arjun
    Leen, Hrish
    Kostrikov, Ilya
    Rhinehart, Nick
    Levine, Sergey
Conference on Robot Learning, Vol 205, 2022: 44-54
  • [18] Hyperparameter Tuning in Offline Reinforcement Learning
    Tittaferrante, Andrew
    Yassine, Abdulsalam
2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), 2022: 585-590
  • [20] Survival Instinct in Offline Reinforcement Learning
    Li, Anqi
    Misra, Dipendra
    Kolobov, Andrey
    Cheng, Ching-An
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023