Estimating Disentangled Belief about Hidden State and Hidden Task for Meta-Reinforcement Learning

Cited by: 0
Authors
Akuzawa, Kei [1 ]
Iwasawa, Yusuke [1 ]
Matsuo, Yutaka [1 ]
Affiliations
[1] Univ Tokyo, Grad Sch Engn, Tokyo, Japan
Source
LEARNING FOR DYNAMICS AND CONTROL, VOL 144 | 2021 / Vol. 144
Keywords
Meta-reinforcement learning; Partially observable Markov decision process; State space models; Amortized inference; Disentanglement;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
There is considerable interest in designing meta-reinforcement learning (meta-RL) algorithms, which enable autonomous agents to adapt to new tasks from a small amount of experience. In meta-RL, the specification of the current task (such as its reward function) is hidden from the agent. In addition, in realistic environments, states are hidden within each task owing to sensor noise or other limitations. The meta-RL agent therefore faces the challenge of inferring both the hidden task and the hidden states from a small amount of experience. To address this, we propose estimating a disentangled belief about the task and states, leveraging the inductive bias that the task and states can be regarded as global and local features of each task, respectively. Specifically, we train a hierarchical state-space model (HSSM), parameterized by deep neural networks, as an environment model whose global and local latent variables correspond to the task and states, respectively. Because the HSSM does not allow analytical computation of the posterior distribution, i.e., the belief, we employ amortized inference to approximate it. Once the belief is obtained, we can augment the observations of a model-free policy with the belief to train the policy efficiently. Moreover, because the task and state information are factorized and interpretable, downstream policy training is easier than with prior methods that do not account for this hierarchical structure. Empirical validation on a GridWorld environment confirms that the HSSM can separate the hidden task and state information. We then compare the meta-RL agent equipped with the HSSM against prior meta-RL methods in MuJoCo environments and confirm that our agent requires less training data and reaches higher final performance.
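The core pipeline the abstract describes — infer a global (task) belief and per-step local (state) beliefs, then feed them to a model-free policy alongside the raw observation — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the fixed random projections stand in for the trained HSSM inference networks, and the function names (`encode_beliefs`, `augment_observation`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_beliefs(trajectory, dim_task=4, dim_state=4):
    """Toy stand-in for the HSSM's amortized inference networks.

    Returns a global latent summarizing the whole trajectory (the hidden
    task) and one local latent per time step (the hidden states).
    """
    T, obs_dim = trajectory.shape
    # Fixed random projections standing in for trained network weights.
    W_task = rng.standard_normal((obs_dim, dim_task)) / np.sqrt(obs_dim)
    W_state = rng.standard_normal((obs_dim, dim_state)) / np.sqrt(obs_dim)
    z_task = np.tanh(trajectory.mean(axis=0) @ W_task)   # global: pooled over time
    z_state = np.tanh(trajectory @ W_state)              # local: one per step
    return z_task, z_state

def augment_observation(obs, z_task, z_state_t):
    """Belief-augmented policy input: observation plus both beliefs."""
    return np.concatenate([obs, z_task, z_state_t])

# Usage: 10 steps of 6-dimensional observations.
traj = rng.standard_normal((10, 6))
z_task, z_state = encode_beliefs(traj)
policy_input = augment_observation(traj[-1], z_task, z_state[-1])
print(policy_input.shape)  # (14,)  -> 6 obs dims + 4 task dims + 4 state dims
```

The disentanglement claimed in the paper comes from this factorization: the task belief is pooled across the whole trajectory (it varies between tasks but not within one), while the state belief varies step by step.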
Pages: 14