Distributed Bayesian optimisation framework for deep neuroevolution

Cited by: 15
Authors
Chandra, Rohitash [1 ,2 ]
Tiwari, Animesh [3 ]
Affiliations
[1] Univ New South Wales, UNSW Data Sci Hub, Sydney, NSW, Australia
[2] Univ New South Wales, Sch Math & Stat, Sydney, NSW, Australia
[3] Indian Inst Technol Guwahati, Dept Civil Engn, Gauhati, Assam, India
Keywords
Distributed evolutionary algorithms; Surrogate-assisted optimization; Bayesian optimisation; Parallel computing; Neuroevolution; PARTICLE SWARM OPTIMIZATION; RESPONSE-SURFACE METHODS; NEURAL-NETWORKS; EVOLUTIONARY ALGORITHMS; COOPERATIVE COEVOLUTION; GENETIC ALGORITHMS; DESIGN; COMPUTATION; INVERSION; MODELS;
DOI
10.1016/j.neucom.2021.10.045
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neuroevolution is a machine learning method for evolving neural network parameters and topology with a high degree of flexibility, which makes it applicable to a wide range of architectures. Neuroevolution has been popular in reinforcement learning and has also been shown to be promising for deep learning. A major limitation of neuroevolution is the high computational time required for convergence, since learning (evolution) typically does not utilize gradient information. Bayesian optimisation, also known as surrogate-assisted optimisation, has been popular for expensive engineering optimisation problems and hyper-parameter tuning in machine learning; its major feature is reducing computational load by approximating the actual model with an acquisition function (surrogate model) that is computationally cheaper. It has potential for training deep learning models via neuroevolution given large datasets and complex models. Recent advances in parallel and distributed computing have enabled efficient implementation of neuroevolution for complex and computationally expensive neural models. In this paper, we present a Bayesian optimisation framework for deep neuroevolution that uses a distributed architecture to provide computational efficiency in training. Our results are promising for simple to deep neural network models such as convolutional neural networks, which motivates further applications. (c) 2021 Elsevier B.V. All rights reserved.
Pages: 51-65
Number of pages: 15
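As an illustrative note on the surrogate-assisted mechanism described in the abstract, the sketch below shows a minimal Bayesian optimisation loop: a Gaussian-process surrogate is fitted to a handful of expensive evaluations, and an expected-improvement acquisition function picks the next point to evaluate. This is a toy sketch, not the authors' distributed neuroevolution framework; the function and variable names (objective, expected_improvement, etc.) are illustrative assumptions, while the NumPy/SciPy/scikit-learn calls are standard.

# Minimal Bayesian (surrogate-assisted) optimisation sketch.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def objective(x):
    """Stand-in for an expensive evaluation, e.g. training a neural
    network with parameters x and returning its validation loss."""
    return np.sin(3.0 * x) + 0.1 * x ** 2


def expected_improvement(x_cand, gp, y_best, xi=0.01):
    """Expected improvement over the best observed value (minimisation)."""
    mu, sigma = gp.predict(x_cand.reshape(-1, 1), return_std=True)
    sigma = np.maximum(sigma, 1e-9)      # guard against zero predictive std
    imp = y_best - mu - xi               # predicted improvement over incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)


rng = np.random.default_rng(0)
bounds = (-2.0, 2.0)

# Initial design: a few random evaluations of the expensive objective.
X = rng.uniform(*bounds, size=(5, 1))
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):                      # optimisation budget
    gp.fit(X, y)                         # refit the cheap surrogate
    # Maximise the acquisition on a dense grid (cheap in this 1-D example).
    grid = np.linspace(*bounds, 1000)
    ei = expected_improvement(grid, gp, y.min())
    x_next = grid[np.argmax(ei)]
    # Only the selected point is evaluated with the expensive objective.
    X = np.vstack([X, [[x_next]]])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmin(y)].item(), "best objective:", y.min())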