Evolving hierarchical memory-prediction machines in multi-task reinforcement learning

Cited by: 0
Authors
Stephen Kelly
Tatiana Voegerl
Wolfgang Banzhaf
Cedric Gondro
Institutions
[1] Michigan State University, BEACON Center for the Study of Evolution in Action
Source
Genetic Programming and Evolvable Machines | 2021 / Volume 22
Keywords
Genetic programming; Reinforcement learning; Temporal memory; Multi-task
DOI
Not available
Abstract
A fundamental aspect of intelligent agent behaviour is the ability to encode salient features of experience in memory and use these memories, in combination with current sensory information, to predict the best action for each situation such that long-term objectives are maximized. The world is highly dynamic, and behavioural agents must generalize across a variety of environments and objectives over time. This scenario can be modeled as a partially-observable multi-task reinforcement learning problem. We use genetic programming to evolve highly-generalized agents capable of operating in six unique environments from the control literature, including OpenAI’s entire Classic Control suite. This requires the agent to support discrete and continuous actions simultaneously. No task-identification sensor inputs are provided, thus agents must identify tasks from the dynamics of state variables alone and define control policies for each task. We show that emergent hierarchical structure in the evolving programs leads to multi-task agents that succeed by performing a temporal decomposition and encoding of the problem environments in memory. The resulting agents are competitive with task-specific agents in all six environments. Furthermore, the hierarchical structure of programs allows for dynamic run-time complexity, which results in relatively efficient operation.
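The abstract notes that the evolved agents must support discrete and continuous actions simultaneously, since the six environments mix both action types. A minimal sketch of how a single real-valued agent output can be mapped to either action type is shown below; the environment specs and the mapping itself are illustrative assumptions, not the paper's actual action interface.

```python
import math

# Hypothetical per-environment action specs (illustrative values only).
ENV_SPECS = {
    "CartPole": {"type": "discrete", "n": 2},
    "Acrobot":  {"type": "discrete", "n": 3},
    "Pendulum": {"type": "continuous", "low": -2.0, "high": 2.0},
}

def to_action(raw, spec):
    """Map one real-valued output vector to an environment-specific action.

    The same raw vector serves both action types: discrete environments
    take the argmax over the first n outputs; continuous environments
    squash the first output into the action bounds via tanh.
    """
    if spec["type"] == "discrete":
        return max(range(spec["n"]), key=lambda i: raw[i])
    lo, hi = spec["low"], spec["high"]
    return lo + (hi - lo) * (math.tanh(raw[0]) + 1.0) / 2.0

raw = [0.3, -1.2, 2.5]  # hypothetical program output registers
print(to_action(raw, ENV_SPECS["Acrobot"]))   # argmax over 3 outputs -> 2
print(to_action(raw, ENV_SPECS["Pendulum"]))  # bounded torque in [-2, 2]
```

Because no task-identification input is provided, the agent itself cannot branch on an environment label; a dispatch layer like this would sit outside the evolved program, while the program must infer the task purely from state-variable dynamics.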
Pages: 573–605
Page count: 32