Ensemble successor representations for task generalization in offline-to-online reinforcement learning

Cited: 0
Authors
Changhong WANG [1 ]
Xudong YU [1 ]
Chenjia BAI [2 ,3 ]
Qiaosheng ZHANG [2 ]
Zhen WANG [4 ]
Affiliations
[1] Space Control and Inertial Technology Research Center, Harbin Institute of Technology
[2] Shanghai Artificial Intelligence Laboratory
[3] Shenzhen Research Institute of Northwestern Polytechnical University
[4] School of Cybersecurity, Northwestern Polytechnical University
DOI: not available
CLC number: TP181 [automated reasoning; machine learning]
Abstract
In reinforcement learning (RL), training a policy from scratch with online experience can be inefficient because exploration is difficult. Offline RL offers a promising alternative: an offline dataset yields an initialized policy, which can then be refined through online interaction. However, existing approaches perform offline and online learning on the same task, without considering the task-generalization problem in offline-to-online adaptation. In real-world applications, it is common to have an offline dataset from only a specific task while aiming for fast online adaptation to several tasks. To address this problem, our work builds on the investigation of successor representations for task generalization in online RL and extends the framework to incorporate offline-to-online learning. We demonstrate that the conventional successor-feature paradigm cannot effectively utilize offline data or improve performance on a new task through online fine-tuning. To mitigate this, we introduce a novel methodology that leverages offline data to acquire an ensemble of successor representations and subsequently constructs ensemble Q functions. This approach enables robust representation learning from datasets with different coverage and facilitates fast adaptation of the Q functions to new tasks during the online fine-tuning phase. Extensive empirical evaluations provide compelling evidence of the superior performance of our method in generalizing to diverse or even unseen tasks.
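The construction sketched in the abstract rests on the standard successor-feature identity: if rewards are linear in features, r = φ(s,a)·w, then Q(s,a) = ψ(s,a)·w, where ψ is the discounted sum of future features, so a new task only requires a new weight vector w. A minimal sketch of the ensemble variant is below; all names, dimensions, and the pessimistic (min) aggregation over ensemble members are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: d-dimensional features phi(s, a),
# K ensemble members, n_actions discrete actions at the current state.
d, K, n_actions = 8, 5, 4

# Ensemble of successor representations: each psi_k(s, a) estimates the
# discounted sum of future features phi under the current policy.
# Here the members are random placeholders standing in for trained networks.
psi_ensemble = rng.normal(size=(K, n_actions, d))

# Task weight vector w: rewards are assumed linear in features (r = phi . w),
# so each member yields Q_k(s, a) = psi_k(s, a) . w.
w_task = rng.normal(size=d)

def ensemble_q(psi, w):
    """Per-member Q values Q_k(s, a) = <psi_k(s, a), w>; shape (K, n_actions)."""
    return psi @ w

def pessimistic_q(psi, w):
    """Conservative aggregate over the ensemble (elementwise min), a common
    choice in offline RL to avoid overestimating unseen actions."""
    return ensemble_q(psi, w).min(axis=0)

q_members = ensemble_q(psi_ensemble, w_task)   # (K, n_actions)
q_agg = pessimistic_q(psi_ensemble, w_task)    # (n_actions,)
greedy_action = int(np.argmax(q_agg))
```

Because ψ is task-independent, adapting to a new task amounts to swapping in a new `w_task` (e.g. fit by regressing observed rewards on φ) while the ensemble of representations is reused.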
Pages: 240-255 (16 pages)
Published in: Science China Information Sciences, 2024, 67(7): 240-255