Simultaneously Evolving Deep Reinforcement Learning Models using Multifactorial Optimization

Cited by: 7
Authors
Martinez, Aritz D. [1]
Osaba, Eneko [1]
Del Ser, Javier [1,2]
Herrera, Francisco [3]
Affiliations
[1] TECNALIA, Basque Res & Technol Alliance BRTA, Derio 48160, Bizkaia, Spain
[2] Univ Basque Country, Bilbao 48013, Bizkaia, Spain
[3] Univ Granada, DaSCI Andalusian Inst Data Sci & Computat Intelli, Granada 18071, Spain
Source
2020 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC) | 2020
Keywords
Multifactorial Optimization; Deep Reinforcement Learning; Transfer Learning; Evolutionary Algorithm;
DOI
10.1109/cec48606.2020.9185667
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, Multifactorial Optimization (MFO) has gained notable momentum in the research community. MFO is known for its inherent ability to efficiently address multiple optimization tasks at the same time, while transferring information among such tasks to improve their convergence speed. On the other hand, the quantum leap made by Deep Q Learning (DQL) in the Machine Learning field has made it possible to tackle Reinforcement Learning (RL) problems of unprecedented complexity. Unfortunately, complex DQL models often struggle to converge to optimal policies due to insufficient exploration or sparse rewards. To overcome these drawbacks, pre-trained models are widely harnessed via Transfer Learning, extrapolating knowledge acquired in a source task to the target task. In addition, meta-heuristic optimization has been shown to mitigate the lack of exploration of DQL models. This work proposes an MFO framework capable of simultaneously evolving several DQL models towards solving interrelated RL tasks. Specifically, the proposed framework blends the benefits of meta-heuristic optimization, Transfer Learning and DQL to automate the process of knowledge transfer and policy learning of distributed RL agents. A thorough experimental study is presented and discussed to assess the performance of the framework, its comparison to the traditional Transfer Learning methodology in terms of convergence speed and policy quality, and the intertask relationships found and exploited over the search process.
Pages: 8
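To make the idea sketched in the abstract more concrete, below is a minimal, self-contained illustration (not taken from the paper) of a multifactorial evolutionary loop in which a single population of flat weight vectors, each standing in for the parameters of a DQL policy, is evolved against several tasks at once. The `episode_return` fitness function, the random mating probability `rmp`, the blend crossover and the population sizes are all illustrative assumptions; in the actual framework the fitness would be the episodic reward obtained by a DQN agent in its environment.

```python
import numpy as np

# Hypothetical stand-in for the fitness of a DQL policy whose weights are the
# flat vector "w", evaluated on task "k". In the paper's setting this would be
# the episodic return achieved by the corresponding DQN agent on task k.
def episode_return(w, k):
    target = np.full_like(w, float(k))      # toy per-task optimum
    return -np.sum((w - target) ** 2)       # higher is better

def mfea(n_tasks=2, dim=16, pop_size=40, generations=50, rmp=0.3, seed=0):
    """Minimal multifactorial EA: one population, several tasks, implicit transfer."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))          # candidate weight vectors
    skill = rng.integers(n_tasks, size=pop_size)    # skill factor: assigned task
    fitness = np.array([episode_return(w, k) for w, k in zip(pop, skill)])

    for _ in range(generations):
        children, child_skill = [], []
        while len(children) < pop_size:
            i, j = rng.integers(pop_size, size=2)
            # Implicit knowledge transfer: parents targeting different tasks are
            # allowed to mate only with probability rmp (random mating probability).
            if skill[i] == skill[j] or rng.random() < rmp:
                alpha = rng.random(dim)
                child = alpha * pop[i] + (1 - alpha) * pop[j]     # blend crossover
            else:
                child = pop[i] + rng.normal(scale=0.1, size=dim)  # mutation only
            children.append(child)
            child_skill.append(skill[i] if rng.random() < 0.5 else skill[j])
        children = np.asarray(children)
        child_skill = np.asarray(child_skill)
        child_fit = np.array([episode_return(w, k) for w, k in zip(children, child_skill)])

        # Scalar fitness in the spirit of MFEA: rank each individual within its own
        # task and use 1/rank, so selection is comparable across tasks.
        pool = np.vstack([pop, children])
        pool_skill = np.concatenate([skill, child_skill])
        pool_fit = np.concatenate([fitness, child_fit])
        scalar = np.empty(len(pool))
        for k in range(n_tasks):
            idx = np.where(pool_skill == k)[0]
            order = idx[np.argsort(pool_fit[idx])[::-1]]          # best first
            scalar[order] = 1.0 / (np.arange(len(order)) + 1)
        keep = np.argsort(scalar)[::-1][:pop_size]                # elitist survivors
        pop, skill, fitness = pool[keep], pool_skill[keep], pool_fit[keep]

    return pop, skill, fitness

if __name__ == "__main__":
    _, skill, fitness = mfea()
    for k in range(2):
        mask = skill == k
        if mask.any():
            print(f"task {k}: best fitness {fitness[mask].max():.3f}")
```

The single shared population and the `rmp`-gated crossover are what allow genetic material optimized for one task to leak into the other, which is the mechanism the framework exploits to transfer knowledge between interrelated RL tasks.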