Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes

Cited by: 0
Authors
Xu, Mengdi [1 ]
Ding, Wenhao [1 ]
Zhu, Jiacheng [1 ]
Liu, Zuxin [1 ]
Chen, Baiming [1 ]
Zhao, Ding [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020 | 2020 / Vol. 33
Funding
Andrew W. Mellon Foundation, USA
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Continuously learning to solve unseen tasks with limited experience has been extensively pursued in meta-learning and continual learning, but with restricted assumptions such as accessible task distributions, independently and identically distributed tasks, and clear task delineations. However, real-world physical tasks frequently violate these assumptions, resulting in performance degradation. This paper proposes a continual online model-based reinforcement learning approach that does not require pre-training to solve task-agnostic problems with unknown task boundaries. We maintain a mixture of experts to handle nonstationarity, and represent each different type of dynamics with a Gaussian Process to efficiently leverage collected data and expressively model uncertainty. We propose a transition prior to account for the temporal dependencies in streaming data and update the mixture online via sequential variational inference. Our approach reliably handles the task distribution shift by generating new models for never-before-seen dynamics and reusing old models for previously seen dynamics. In experiments, our approach outperforms alternative methods in non-stationary tasks, including classic control with changing dynamics and decision making in different driving scenarios. Code is available at: https://github.com/mxu34/mbrl-gpmm.
Pages: 12