Recruitment-imitation mechanism for evolutionary reinforcement learning

Cited by: 24
Authors
Lu, Shuai [1 ,2 ]
Han, Shuai [1 ,2 ]
Zhou, Wenbo [1 ,2 ]
Zhang, Junwei [1 ,2 ]
Affiliations
[1] Jilin Univ, Key Lab Symbol Computat & Knowledge Engn, Minist Educ, Changchun 130012, Peoples R China
[2] Jilin Univ, Coll Comp Sci & Technol, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
关键词
Evolutionary reinforcement learning; Reinforcement learning; Evolutionary algorithms; Imitation learning;
DOI
10.1016/j.ins.2020.12.017
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Reinforcement learning, evolutionary algorithms, and imitation learning are three principal methods for continuous control tasks. Reinforcement learning is sample efficient, yet sensitive to hyperparameter settings and in need of efficient exploration; evolutionary algorithms are stable but sample inefficient; imitation learning is both sample efficient and stable, but it requires the guidance of expert data. In this paper, we propose the Recruitment-imitation Mechanism (RIM) for evolutionary reinforcement learning, a scalable framework that combines the advantages of all three methods. The core of this framework is a dual-actor, single-critic reinforcement learning agent. This agent recruits high-fitness actors from the population evolved by the evolutionary algorithm, and these recruited actors guide its learning from the experience replay buffer. At the same time, low-fitness actors in the evolutionary population imitate the behavior patterns of the reinforcement learning agent, raising their fitness. The reinforcement and imitation learners in this framework can be replaced with any off-policy actor-critic reinforcement learner and any data-driven imitation learner. We evaluate RIM on a series of continuous control benchmarks in MuJoCo. The experimental results show that RIM outperforms prior evolutionary and reinforcement learning methods; its components perform significantly better than the corresponding components of the previous evolutionary reinforcement learning algorithm; and recruitment with soft updates enables the reinforcement learning agent to learn faster than recruitment with hard updates. (C) 2020 Elsevier Inc. All rights reserved.
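
The recruitment and imitation steps described in the abstract can be sketched concretely. The following is a minimal, hypothetical PyTorch illustration, not the authors' released implementation: the function names soft_recruit, hard_recruit, and imitation_update, the default tau value, and the use of an MSE behavior-cloning loss are all assumptions made for the sake of the example.

    import torch
    import torch.nn as nn

    def soft_recruit(rl_actor: nn.Module, recruited: nn.Module, tau: float = 0.1) -> None:
        # Soft update: blend the recruited high-fitness actor's weights into the
        # RL actor via Polyak averaging, theta_rl <- (1 - tau)*theta_rl + tau*theta_ev.
        with torch.no_grad():
            for p_rl, p_ev in zip(rl_actor.parameters(), recruited.parameters()):
                p_rl.mul_(1.0 - tau).add_(tau * p_ev)

    def hard_recruit(rl_actor: nn.Module, recruited: nn.Module) -> None:
        # Hard update: overwrite the RL actor with the recruited actor's weights.
        rl_actor.load_state_dict(recruited.state_dict())

    def imitation_update(actor: nn.Module, optimizer: torch.optim.Optimizer,
                         states: torch.Tensor, rl_actions: torch.Tensor) -> float:
        # Behavior cloning: pull a low-fitness actor toward the RL agent's actions
        # on the same states (any data-driven imitation learner could stand in here).
        loss = nn.functional.mse_loss(actor(states), rl_actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

The soft/hard distinction mirrors the abstract's finding: with tau < 1 the RL actor absorbs a recruit's parameters gradually rather than by wholesale copying, which the paper reports leads to faster learning.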
Pages: 172-188
Page count: 17