Battle royale optimizer for training multi-layer perceptron

Cited by: 0
Authors
Saeid Agahian
Taymaz Akan
Affiliations
[1] Erzurum Technical University, Department of Computer Engineering
[2] University of Pardubice, Faculty of Electrical Engineering and Informatics
[3] Ayvansaray University, Software Engineering Department
Source
Evolving Systems | 2022, Vol. 13
Keywords
Feed-forward neural network; Neural network training; Multilayer perceptron; Battle royale optimization; Metaheuristic
DOI
Not available
Abstract
Artificial neural networks (ANNs) are among the most successful tools in machine learning, and their success depends largely on the network architecture and the learning procedure. The multi-layer perceptron (MLP) is a popular form of ANN, and backpropagation is the standard gradient-based approach for training it. However, gradient-based search converges slowly and can become trapped in local minima, which degrades performance. Training an MLP amounts to minimizing the total network error, which can be cast as a continuous optimization problem, and stochastic optimization algorithms have proven effective on such problems. Battle royale optimization (BRO) is a recently proposed population-based metaheuristic for single-objective optimization over continuous search spaces; here it is applied to training MLPs. The proposed method is compared with backpropagation (the generalized delta learning rule) and six well-known optimization algorithms on ten classification benchmark datasets. Experiments confirm that, in terms of error rate, accuracy, and convergence, the proposed approach yields promising results and outperforms its competitors.
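The abstract frames MLP training as continuous optimization over the flattened weight vector, with the total network error as the fitness function. A minimal sketch of that framing is below; it is not the paper's BRO implementation — the toy dataset, layer sizes, and the simple best-guided random-search update are illustrative assumptions standing in for the actual battle royale update rule.

```python
import numpy as np

def mlp_forward(w, X, n_in, n_hid, n_out):
    """Evaluate a one-hidden-layer MLP with weights unpacked from flat vector w."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    h = np.tanh(X @ W1 + b1)          # hidden layer
    return h @ W2 + b2                # output scores

def error_rate(w, X, y, dims):
    """Fitness: fraction of misclassified samples (the 'total network error')."""
    preds = mlp_forward(w, X, *dims).argmax(axis=1)
    return float(np.mean(preds != y))

def train_metaheuristic(X, y, dims, pop_size=30, iters=200, seed=0):
    """Generic population-based search over the weight space (illustrative,
    NOT the paper's BRO update rule)."""
    rng = np.random.default_rng(seed)
    n_in, n_hid, n_out = dims
    n_w = n_in * n_hid + n_hid + n_hid * n_out + n_out
    pop = rng.normal(0.0, 1.0, (pop_size, n_w))
    fit = np.array([error_rate(w, X, y, dims) for w in pop])
    for _ in range(iters):
        best = pop[fit.argmin()]
        # move each candidate toward the current best, plus Gaussian noise
        new = (pop
               + rng.uniform(0, 1, (pop_size, 1)) * (best - pop)
               + rng.normal(0.0, 0.1, (pop_size, n_w)))
        new_fit = np.array([error_rate(w, X, y, dims) for w in new])
        improved = new_fit < fit            # greedy replacement
        pop[improved], fit[improved] = new[improved], new_fit[improved]
    return pop[fit.argmin()], float(fit.min())

# toy two-class dataset: two well-separated clusters
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w_best, err = train_metaheuristic(X, y, dims=(2, 4, 2))
print(f"best training error rate: {err:.3f}")
```

The key design point mirrored from the abstract is that the optimizer only ever sees a flat parameter vector and a scalar error, so any population-based metaheuristic (BRO, PSO, ABC, etc.) can be swapped in without touching the network code.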
Pages: 563–575 (12 pages)