DCAC: Reducing Unnecessary Conservatism in Offline-to-online Reinforcement Learning

Cited by: 1
Authors
Chen, Dongxiang [1 ]
Wen, Ying [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Source
2023 5TH INTERNATIONAL CONFERENCE ON DISTRIBUTED ARTIFICIAL INTELLIGENCE, DAI 2023 | 2023
Keywords
Reinforcement Learning; Offline-to-online; Finetune
DOI
10.1145/3627676.3627677
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in offline reinforcement learning (RL) have made it possible to train capable agents from fixed datasets alone. Nevertheless, dataset quality plays a critical role in determining an agent's performance, and high-quality datasets are often scarce, so agents frequently must be improved through subsequent environmental interaction. In particular, state-action distribution shift can degrade a well-initialized policy, which prevents the straightforward application of off-policy RL algorithms to policies trained offline. Predominant offline-to-online RL approaches are founded on conservatism, a property that can inadvertently limit asymptotic performance. In response, we propose Dynamically Constrained Actor-Critic (DCAC), a method grounded in the mathematical form of dynamically constrained policy optimization. DCAC adjusts the constraint on policy optimization according to a specified rule, stabilizing the initial online learning stage while reducing the undue conservatism that restricts asymptotic performance. Comprehensive experiments across diverse locomotion tasks show that our method improves policies trained offline on various datasets through subsequent online interaction. The empirical results confirm that DCAC mitigates the harmful effects of distribution shift and consistently attains better asymptotic performance than prior work.
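The abstract does not spell out the constraint-adjustment rule, so the following is only a minimal sketch of what dynamically constrained policy optimization could look like in code, assuming a TD3+BC-style actor objective whose behavior-cloning weight is annealed over the course of online fine-tuning; the `constraint_weight` schedule, the `dcac_style_actor_loss` helper, and all hyperparameters are hypothetical illustrations, not the paper's actual algorithm.

```python
# Hypothetical sketch of a dynamically constrained actor update.
# The exact adjustment rule used by DCAC is not given in this record;
# here the behavior-cloning (BC) constraint weight simply decays with
# online steps, so training starts conservative and gradually relaxes.
import torch
import torch.nn as nn


def constraint_weight(step: int, init_weight: float = 2.5,
                      decay_steps: int = 50_000) -> float:
    # Assumed schedule: linearly anneal the BC weight to zero over the
    # first `decay_steps` online environment steps.
    return init_weight * max(0.0, 1.0 - step / decay_steps)


def dcac_style_actor_loss(actor: nn.Module, critic: nn.Module,
                          states: torch.Tensor,
                          dataset_actions: torch.Tensor,
                          step: int) -> torch.Tensor:
    # Constrained actor objective (assumed form, not the paper's exact one):
    #   loss = -Q(s, pi(s)) + w(step) * ||pi(s) - a_data||^2
    # A large w keeps the policy near the offline data (conservative);
    # as w -> 0 the update reduces to a standard off-policy actor step.
    pi_actions = actor(states)
    q_values = critic(torch.cat([states, pi_actions], dim=-1))
    bc_term = ((pi_actions - dataset_actions) ** 2).mean()
    # Scale the Q term by 1/|Q| (as in TD3+BC) so both terms stay comparable.
    lam = 1.0 / q_values.abs().mean().detach().clamp(min=1e-6)
    return -(lam * q_values).mean() + constraint_weight(step) * bc_term


# Example usage with toy MLPs (state_dim=17, action_dim=6, batch of 256):
# actor = nn.Sequential(nn.Linear(17, 64), nn.ReLU(), nn.Linear(64, 6), nn.Tanh())
# critic = nn.Sequential(nn.Linear(23, 64), nn.ReLU(), nn.Linear(64, 1))
# loss = dcac_style_actor_loss(actor, critic, torch.randn(256, 17),
#                              torch.randn(256, 6), step=1_000)
```

Annealing the constraint mirrors the behavior described above: a tight constraint stabilizes early online updates, when distribution shift is most damaging, and relaxing it removes the conservatism that would otherwise cap asymptotic performance.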
Pages: 12
Related Papers (50 records)
  • [1] Sample Efficient Offline-to-Online Reinforcement Learning
    Guo, Siyuan
    Zou, Lixin
    Chen, Hechang
    Qu, Bohao
    Chi, Haotian
    Yu, Philip S.
    Chang, Yi
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (03) : 1299 - 1310
  • [2] Adaptive Policy Learning for Offline-to-Online Reinforcement Learning
    Zheng, Han
    Luo, Xufang
    Wei, Pengfei
    Song, Xuan
    Li, Dongsheng
    Jiang, Jing
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 11372 - 11380
  • [3] Learning Aerial Docking via Offline-to-Online Reinforcement Learning
    Tao, Yang
    Feng, Yuting
    Yu, Yushu
    2024 4TH INTERNATIONAL CONFERENCE ON COMPUTER, CONTROL AND ROBOTICS, ICCCR 2024, 2024, : 305 - 309
  • [4] Residual Learning and Context Encoding for Adaptive Offline-to-Online Reinforcement Learning
    Nakhaei, Mohammadreza
    Scannell, Aidan
    Pajarinen, Joni
    6TH ANNUAL LEARNING FOR DYNAMICS & CONTROL CONFERENCE, 2024, 242 : 1107 - 1121
  • [5] Effective Traffic Signal Control with Offline-to-Online Reinforcement Learning
    Ma, Jinming
    Wu, Feng
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 5567 - 5573
  • [6] Weighting Online Decision Transformer with Episodic Memory for Offline-to-Online Reinforcement Learning
    Ma, Xiao
    Li, Wu-Jun
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2024), 2024, : 10793 - 10799
  • [7] Ensemble successor representations for task generalization in offline-to-online reinforcement learning
    Wang, Changhong
    Yu, Xudong
    Bai, Chenjia
    Zhang, Qiaosheng
    Wang, Zhen
    SCIENCE CHINA-INFORMATION SCIENCES, 2024, 67 (07) : 240 - 255
  • [8] ENOTO: Improving Offline-to-Online Reinforcement Learning with Q-Ensembles
    Zhao, Kai
    Hao, Jianye
    Ma, Yi
    Liu, Jinyi
    Zheng, Yan
    Meng, Zhaopeng
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 5563 - 5571
  • [9] Towards Robust Offline-to-Online Reinforcement Learning via Uncertainty and Smoothness
    Wen, Xiaoyu
    Yu, Xudong
    Yang, Rui
    Chen, Haoyuan
    Bai, Chenjia
    Wang, Zhen
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 81 : 481 - 509