Latent Exploration for Reinforcement Learning

Cited by: 0
Authors
Chiappa, Alberto Silvio [1]
Vargas, Alessandro Marin [1]
Huang, Ann Zixiang [1,2]
Mathis, Alexander [1]
Affiliations
[1] Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
[2] Mila, Montreal, QC, Canada
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Funding
Swiss National Science Foundation
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In Reinforcement Learning, agents learn policies by exploring and interacting with the environment. Due to the curse of dimensionality, learning policies that map high-dimensional sensory input to motor output is particularly challenging. During training, state-of-the-art methods (e.g., SAC, PPO) explore the environment by perturbing the actuation with independent Gaussian noise. While this unstructured exploration has proven successful in numerous tasks, it can be suboptimal for overactuated systems. When multiple actuators, such as motors or muscles, drive behavior, uncorrelated perturbations risk diminishing each other's effect, or modifying the behavior in a task-irrelevant way. While solutions to introduce time correlation across action perturbations exist, introducing correlation across actuators has been largely ignored. Here, we propose LATent TIme-Correlated Exploration (Lattice), a method to inject temporally-correlated noise into the latent state of the policy network, which can be seamlessly integrated with on- and off-policy algorithms. We demonstrate that the noisy actions generated by perturbing the network's activations can be modeled as a multivariate Gaussian distribution with a full covariance matrix. In the PyBullet locomotion tasks, Lattice-SAC achieves state-of-the-art results, and reaches 18% higher reward than unstructured exploration in the Humanoid environment. In the musculoskeletal control environments of MyoSuite, Lattice-PPO achieves higher reward in most reaching and object manipulation tasks, while also finding more energy-efficient policies, with energy reductions of 20-60%. Overall, we demonstrate the effectiveness of structured action noise in time and actuator space for complex motor control tasks. The code is available at: https://github.com/amathislab/lattice.
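The abstract's claim that perturbing the policy's latent state induces action noise with a full covariance matrix can be sketched numerically. The snippet below is a minimal, hypothetical illustration (all names and dimensions are invented, and it omits Lattice's time-correlation component): isotropic Gaussian noise added to a latent vector, passed through a linear output layer W, yields action noise with covariance sigma^2 * W W^T, i.e., correlated across actuators.

```python
import numpy as np

# Hypothetical sketch of latent-space exploration; not the authors' implementation.
rng = np.random.default_rng(0)

latent_dim, action_dim = 8, 4
W = rng.standard_normal((action_dim, latent_dim))  # final linear layer of the policy head
sigma = 0.3                                        # std of isotropic noise on the latent state

def noisy_action(latent: np.ndarray) -> np.ndarray:
    """Perturb the latent state instead of the action itself."""
    perturbed = latent + sigma * rng.standard_normal(latent_dim)
    return W @ perturbed

# The induced action noise is multivariate Gaussian with full covariance
# sigma^2 * W @ W.T: actuators reading overlapping latent features receive
# correlated perturbations, unlike independent per-actuator Gaussian noise.
cov_theory = sigma**2 * (W @ W.T)

latent = rng.standard_normal(latent_dim)
samples = np.stack([noisy_action(latent) for _ in range(200_000)])
cov_empirical = np.cov(samples, rowvar=False)
print("max deviation from theory:", np.abs(cov_empirical - cov_theory).max())
```

Because off-diagonal entries of the empirical covariance are non-zero, the perturbations of different actuators are correlated, which is the structural difference from unstructured per-actuator exploration described in the abstract.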
Pages: 23