Data-efficient model-based reinforcement learning with trajectory discrimination

Cited by: 1
Authors
Qu, Tuo [1 ]
Duan, Fuqing [1 ]
Zhang, Junge [2 ]
Zhao, Bo [3 ]
Huang, Wenzhen [2 ]
Affiliations
[1] Beijing Normal Univ, Sch Artificial Intelligence, 19 Xinjiekou Outer St, Beijing 100875, Peoples R China
[2] Chinese Acad Sci, Inst Automat, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
[3] Beijing Normal Univ, Sch Syst Sci, 19 Xinjiekou Outer St, Beijing 100875, Peoples R China
Keywords
Reinforcement learning; Deep learning; Continuous control task; World model; OBJECTIVE PENALTY-FUNCTION; PREDICTIVE CONTROL; TRACKING; OPTIMIZATION
DOI
10.1007/s40747-023-01247-5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep reinforcement learning is widely used to solve high-dimensional, complex sequential decision-making problems. However, one of its biggest challenges is sample efficiency, especially on such high-dimensional, complex problems. Model-based reinforcement learning addresses this with a learned world model, but its performance is limited by the imperfections of that model, so it typically achieves worse asymptotic performance than model-free reinforcement learning. In this paper, we propose a novel model-based reinforcement learning algorithm called World Model with Trajectory Discrimination (WMTD). We learn a representation of temporal dynamics by adding a trajectory discriminator to the world model, and we then weight state-value estimates by the discriminator's output when optimizing the policy. Specifically, we augment trajectories to generate negative samples and train a trajectory discriminator that shares its feature extractor with the world model. Experimental results demonstrate that our method improves sample efficiency and achieves state-of-the-art performance on DeepMind Control tasks.
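The abstract describes three ingredients: a discriminator that shares a feature extractor with the world model, negative samples produced by augmenting real trajectories, and discriminator scores used to weight state-value estimates. The sketch below illustrates that structure in PyTorch under stated assumptions; all module and function names (FeatureExtractor, TrajectoryDiscriminator, augment_trajectory, weighted_value_target) and the specific augmentation (noise plus time shuffling) are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch of the WMTD idea described in the abstract (PyTorch).
# Names and the augmentation scheme are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureExtractor(nn.Module):
    """Encoder shared by the world model and the trajectory discriminator."""
    def __init__(self, obs_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )

    def forward(self, obs):
        return self.net(obs)


class TrajectoryDiscriminator(nn.Module):
    """Scores whether a trajectory looks like a real (positive) one."""
    def __init__(self, encoder, hidden_dim=128):
        super().__init__()
        self.encoder = encoder  # shared with the world model
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, traj_obs):  # traj_obs: (batch, time, obs_dim)
        feats = self.encoder(traj_obs)
        _, h = self.rnn(feats)
        return torch.sigmoid(self.head(h[-1]))  # probability of "real"


def augment_trajectory(traj_obs, noise_scale=0.1):
    """One simple way to build a negative sample: add noise and shuffle time."""
    noisy = traj_obs + noise_scale * torch.randn_like(traj_obs)
    perm = torch.randperm(traj_obs.shape[1])
    return noisy[:, perm, :]


def discriminator_loss(disc, real_traj):
    """Binary cross-entropy: real trajectories vs. augmented negatives."""
    fake_traj = augment_trajectory(real_traj)
    real_score = disc(real_traj)
    fake_score = disc(fake_traj)
    return (F.binary_cross_entropy(real_score, torch.ones_like(real_score))
            + F.binary_cross_entropy(fake_score, torch.zeros_like(fake_score)))


def weighted_value_target(disc, imagined_traj, values):
    """Down-weight value estimates along imagined trajectories that the
    discriminator judges unrealistic."""
    w = disc(imagined_traj).detach()  # (batch, 1)
    return w * values
```

In this reading, the shared encoder lets the discriminator's temporal-dynamics signal shape the same representation the world model uses, while the weighting step reduces the influence of imagined rollouts that drift away from realistic trajectories.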
Pages: 1927-1936
Number of pages: 10