Offline Model-Based Adaptable Policy Learning for Decision-Making in Out-of-Support Regions

Cited by: 0
Authors
Chen, Xiong-Hui [1 ]
Luo, Fan-Ming [1 ]
Yu, Yang [1 ]
Li, Qingyang [2 ]
Qin, Zhiwei [2 ]
Shang, Wenjie [3 ]
Ye, Jieping [3 ]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing 210023, Jiangsu, Peoples R China
[2] DiDi Labs, Mountain View, CA 94043 USA
[3] DiDi Chuxing, Beijing 300450, Peoples R China
Funding
US National Science Foundation;
关键词
Adaptation models; Uncertainty; Predictive models; Behavioral sciences; Extrapolation; Trajectory; Reinforcement learning; Adaptable policy learning; meta learning; model-based reinforcement learning; offline reinforcement learning;
DOI
10.1109/TPAMI.2023.3317131
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Numbers
081104; 0812; 0835; 1405;
Abstract
In reinforcement learning, a promising direction for avoiding the cost of online trial and error is learning from an offline dataset. Current offline reinforcement learning methods commonly constrain the policy to the in-support regions of the offline dataset in order to ensure the robustness of the resulting policies. Such constraints, however, also limit the potential of those policies. In this paper, to unlock the potential of offline policy learning, we directly investigate decision-making in out-of-support regions and propose offline Model-based Adaptable Policy LEarning (MAPLE). With this approach, instead of learning within in-support regions, we learn an adaptable policy that can adjust its behavior in out-of-support regions when deployed. We give a practical implementation of MAPLE via meta-learning and ensemble model learning techniques. We conduct experiments on MuJoCo locomotion tasks with offline datasets. The results show that the proposed method can make robust decisions in out-of-support regions and achieves better performance than state-of-the-art algorithms.
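The two ingredients the abstract names, an ensemble of learned dynamics models (whose disagreement can flag out-of-support states) and a policy that conditions on a context summary of recent transitions (the meta-learning component, so behavior can adapt at deployment time), can be sketched as follows. This is a minimal illustrative simplification, not the paper's implementation: the class names, the linear models, and the decayed-mean context update are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

class EnsembleDynamics:
    """Ensemble of (here: random linear) dynamics models.

    Prediction disagreement across members is a common proxy for
    being out of the support of the offline dataset.
    """
    def __init__(self, n_models, state_dim, action_dim):
        self.W = rng.normal(0.0, 0.1, size=(n_models, state_dim, state_dim + action_dim))

    def predict(self, state, action):
        x = np.concatenate([state, action])
        preds = self.W @ x                       # (n_models, state_dim)
        mean = preds.mean(axis=0)                # ensemble-mean next state
        disagreement = preds.std(axis=0).mean()  # high => likely out-of-support
        return mean, disagreement

class AdaptablePolicy:
    """Policy conditioned on a context vector summarizing recent transitions.

    Updating the context online is what lets the policy adapt its
    behavior when it encounters out-of-support regions at deployment.
    """
    def __init__(self, state_dim, action_dim, context_dim):
        self.Wc = rng.normal(0.0, 0.1, size=(context_dim, 2 * state_dim + action_dim))
        self.Wa = rng.normal(0.0, 0.1, size=(action_dim, state_dim + context_dim))
        self.context = np.zeros(context_dim)

    def update_context(self, state, action, next_state):
        # Recurrent summary of transition history, simplified to a decayed mean.
        feat = np.tanh(self.Wc @ np.concatenate([state, action, next_state]))
        self.context = 0.9 * self.context + 0.1 * feat

    def act(self, state):
        return np.tanh(self.Wa @ np.concatenate([state, self.context]))

# A short deployment-style rollout: the policy acts, observes the
# (model-predicted) next state, and refreshes its context each step.
ens = EnsembleDynamics(n_models=5, state_dim=3, action_dim=2)
pol = AdaptablePolicy(state_dim=3, action_dim=2, context_dim=4)
state = np.ones(3)
for _ in range(3):
    action = pol.act(state)
    next_state, disagreement = ens.predict(state, action)
    pol.update_context(state, action, next_state)
    state = next_state
```

In the paper's actual training scheme, each ensemble member serves as a distinct virtual environment, and the context-conditioned policy is trained across all of them so that the same adaptation mechanism transfers to the real environment.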
Pages: 15260-15274 (15 pages)