DCAC: Reducing Unnecessary Conservatism in Offline-to-online Reinforcement Learning

Cited by: 1
Authors
Chen, Dongxiang [1 ]
Wen, Ying [1 ]
Affiliations
[1] Shanghai Jiao Tong University, Shanghai, People's Republic of China
Source
2023 5th International Conference on Distributed Artificial Intelligence (DAI 2023), 2023
Keywords
Reinforcement Learning; Offline-to-online; Finetune
DOI
10.1145/3627676.3627677
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in offline reinforcement learning (RL) make it possible to train capable agents from fixed datasets alone. However, an agent's performance depends heavily on dataset quality, and high-quality datasets are scarce, so agents often must be improved through subsequent online interaction. In particular, state-action distribution shift can degrade a well-initialized policy, which prevents off-policy RL algorithms from being applied directly to policies trained offline. Most offline-to-online RL approaches rely on conservatism, which can inadvertently limit asymptotic performance. In response, we propose Dynamically Constrained Actor-Critic (DCAC), a method grounded in the mathematical form of dynamically constrained policy optimization. DCAC adjusts the constraint on policy optimization according to a specified rule, stabilizing the initial stage of online learning while reducing the undue conservatism that restricts asymptotic performance. Through comprehensive experiments across diverse locomotion tasks, we show that our method improves policies trained offline on various datasets via subsequent online environmental interaction. The empirical results substantiate that our method mitigates the harmful effects of distribution shift and consistently attains better asymptotic performance than prior work.
Pages: 12
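
The abstract's core mechanism is a constraint on policy optimization that is relaxed as online fine-tuning proceeds. The paper's actual objective and schedule are not reproduced in this record, so the Python sketch below is only a minimal illustration of the general idea, assuming a TD3+BC-style behavior-cloning penalty whose weight is annealed by a hypothetical linear rule; the names constraint_weight, actor_loss, and w are assumptions, not the authors' DCAC formulation.

import torch
import torch.nn as nn

def constraint_weight(step: int, total_steps: int,
                      w_start: float = 1.0, w_end: float = 0.0) -> float:
    """One possible 'specified rule' (assumed, not from the paper):
    linearly relax the constraint as online interaction accumulates."""
    frac = min(step / total_steps, 1.0)
    return w_start + frac * (w_end - w_start)

def actor_loss(actor: nn.Module, critic: nn.Module,
               states: torch.Tensor, dataset_actions: torch.Tensor,
               w: float) -> torch.Tensor:
    """Maximize Q while penalizing deviation from dataset actions;
    w controls how conservative the update is."""
    actions = actor(states)
    q = critic(torch.cat([states, actions], dim=-1))
    bc = ((actions - dataset_actions) ** 2).mean()  # behavior-cloning constraint
    return -q.mean() + w * bc

if __name__ == "__main__":
    # Tiny smoke test with random networks and data (illustrative only).
    actor = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Tanh())
    critic = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 1))
    states = torch.randn(32, 4)
    dataset_actions = torch.rand(32, 2) * 2 - 1
    w = constraint_weight(step=1000, total_steps=100_000)
    actor_loss(actor, critic, states, dataset_actions, w).backward()

Under this schedule the update is fully constrained at the start of fine-tuning, which stabilizes early online learning, and it approaches an unconstrained off-policy update by the end, which is the trade-off between stability and asymptotic performance that the abstract targets.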