DCAC: Reducing Unnecessary Conservatism in Offline-to-online Reinforcement Learning

Cited by: 1
|
Authors
Chen, Dongxiang [1 ]
Wen, Ying [1 ]
Affiliation
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Source
2023 5TH INTERNATIONAL CONFERENCE ON DISTRIBUTED ARTIFICIAL INTELLIGENCE, DAI 2023 | 2023
Keywords
Reinforcement Learning; Offline-to-online; Finetune
DOI
10.1145/3627676.3627677
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in offline reinforcement learning (RL) have made it possible to train powerful agents from fixed datasets alone. Nevertheless, dataset quality plays a critical role in determining an agent's performance, and high-quality datasets are often scarce. This scarcity makes it necessary to further improve agents through subsequent interaction with the environment. In particular, the state-action distribution shift can harm well-initialized policies, which prevents off-policy RL algorithms from being applied directly to policies trained offline. Predominant offline-to-online RL approaches rely on conservatism, which may inadvertently limit asymptotic performance. In response, we propose Dynamically Constrained Actor-Critic (DCAC), a method grounded in the mathematical form of dynamically constrained policy optimization. DCAC adjusts the constraints on policy optimization according to a specified rule, stabilizing the initial online learning stage while reducing the undue conservatism that restricts asymptotic performance. Through comprehensive experiments on diverse locomotion tasks, we show that our method improves policies trained offline on various datasets via subsequent online interaction. The empirical results confirm that our method mitigates the harmful effects of distribution shift and consistently attains better asymptotic performance than prior works.
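
The abstract does not spell out the constraint form or the adjustment rule. As an illustration only, the sketch below shows one way a dynamically constrained actor update could be written, assuming a TD3+BC-style behavior-cloning constraint whose weight is relaxed by a simple linear schedule as online interaction proceeds; the class names, the schedule, and its parameters are hypothetical and are not taken from the paper.

# Illustrative sketch only (not the paper's algorithm): an actor update with a
# dynamically weighted behavior constraint, in the spirit of the "dynamically
# constrained policy optimization" described in the abstract. The constraint
# form and the decay schedule are assumptions.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def constraint_weight(step, start=2.5, end=0.0, decay_steps=100_000):
    # Hypothetical rule: relax the constraint linearly as online data accumulates.
    frac = min(step / decay_steps, 1.0)
    return start + frac * (end - start)

def actor_loss(actor, critic, states, batch_actions, step):
    # Maximize Q while staying close to the batch actions, with a constraint
    # weight that shrinks over the course of online fine-tuning.
    pi = actor(states)
    q = critic(states, pi)
    lam = q.abs().mean().detach() + 1e-6          # scale normalization (as in TD3+BC)
    bc = ((pi - batch_actions) ** 2).mean()       # behavior-cloning constraint term
    return -(q.mean() / lam) + constraint_weight(step) * bc

# Usage sketch: one gradient step on a random batch (HalfCheetah-like dimensions).
actor, critic = Actor(17, 6), Critic(17, 6)
opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
states = torch.randn(256, 17)
actions = torch.randn(256, 6).clamp(-1.0, 1.0)
loss = actor_loss(actor, critic, states, actions, step=5_000)
opt.zero_grad()
loss.backward()
opt.step()

A decaying constraint weight of this kind keeps the policy close to the data early in online training (stabilizing the initial stage) and then gradually removes the constraint so that it no longer caps asymptotic performance, which is the behavior the abstract attributes to DCAC.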
Pages: 12
Related Papers
50 results in total
  • [11] Ensemble successor representations for task generalization in offline-to-online reinforcement learning
    Wang, Changhong
    Yu, Xudong
    Bai, Chenjia
    Zhang, Qiaosheng
    Wang, Zhen
    SCIENCE CHINA-INFORMATION SCIENCES, 2024, 67 (07)
  • [12] Towards Robust Offline-to-Online Reinforcement Learning via Uncertainty and Smoothness
    Wen, Xiaoyu
    Yu, Xudong
    Yang, Rui
    Chen, Haoyuan
    Bai, Chenjia
    Wang, Zhen
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 81 : 481 - 509
  • [13] A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning
    Zhang, Yinmin
    Liu, Jie
    Li, Chuming
    Niu, Yazhe
    Yang, Yaodong
    Liu, Yu
    Ouyang, Wanli
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16908 - 16916
  • [14] Efficient and Stable Offline-to-online Reinforcement Learning via Continual Policy Revitalization
    Kong, Rui
    Wu, Chenyang
    Gao, Chen-Xiao
    Zhang, Zongzhang
    Li, Ming
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 4317 - 4325
  • [15] SUF: Stabilized Unconstrained Fine-Tuning for Offline-to-Online Reinforcement Learning
    Feng, Jiaheng
    Feng, Mingxiao
    Song, Haolin
    Zhou, Wengang
    Li, Houqiang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 11, 2024, : 11961 - 11969
  • [16] Decentralized Task Offloading in Edge Computing: An Offline-to-Online Reinforcement Learning Approach
    Lin, Hongcai
    Yang, Lei
    Guo, Hao
    Cao, Jiannong
    IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (06) : 1603 - 1615
  • [17] Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning
    Wang, Shenzhi
    Yang, Qisen
    Gao, Jiawei
    Lin, Matthieu
    Chen, Hao
    Wu, Liwei
    Jia, Ning
    Song, Shiji
    Huang, Gao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [18] O2OAT: Efficient Offline-to-Online Reinforcement Learning with Adaptive Transition Strategy
    Shi, Wei
    Huang, Honglan
    Liang, Xingxing
    Zhang, Longfei
    Yang, Fangjie
    Cheng, Guangquan
    Huang, Jincai
    Liu, Zhong
    Xu, Dan
    2024 10TH INTERNATIONAL CONFERENCE ON BIG DATA AND INFORMATION ANALYTICS, BIGDIA 2024, 2024, : 569 - 576
  • [19] An offline-to-online reinforcement learning approach based on multi-action evaluation with policy extension
    Cheng, Xuebo
    Huang, Xiaohui
    Huang, Zhichao
    Jiang, Nan
    APPLIED INTELLIGENCE, 2024, 54 (23) : 12246 - 12271
  • [20] Efficient Offline Reinforcement Learning With Relaxed Conservatism
    Huang, Longyang
    Dong, Botao
    Zhang, Weidong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (08) : 5260 - 5272