Training feedforward neural networks with Bayesian hyper-heuristics

Cited by: 0
Authors
Schreuder, A. N. [1]
Bosman, A. S. [1]
Engelbrecht, A. P. [2,3]
Cleghorn, C. W. [4]
Affiliations
[1] Univ Pretoria, Pretoria, Gauteng, South Africa
[2] Stellenbosch Univ, Stellenbosch, Western Cape, South Africa
[3] Gulf Univ Sci & Technol, Ctr Appl Math & Bioinformat, Mishref, Kuwait
[4] Univ Witwatersrand, Johannesburg, Gauteng, South Africa
Keywords
Hyper-heuristics; Meta-learning; Feedforward neural networks; Supervised learning; Bayesian statistics; Optimization
DOI
10.1016/j.ins.2024.121363
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
Training feedforward neural networks (FFNNs) can benefit from automation in which the best heuristic to train the network is selected by a high-level probabilistic heuristic. This research introduces a novel population-based Bayesian hyper-heuristic (BHH) used to train FFNNs. The performance of the BHH is compared to that of ten popular low-level heuristics, each with different search behaviour. The chosen heuristic pool consists of classic gradient-based heuristics as well as metaheuristics (MHs). The empirical process is executed on fourteen datasets comprising classification and regression problems with varying characteristics. The BHH is shown to train FFNNs well and to provide an automated method for finding the best heuristic to train the FFNNs at various stages of the training process.
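The abstract only describes the BHH at a high level. The following is a minimal, self-contained Python sketch of the general idea, selecting between low-level training heuristics with a Bayesian (Beta-posterior, Thompson-sampling) rule; it is not the authors' implementation, and the toy network, the two stand-in heuristics (a gradient step and a random perturbation) and the uniform priors are assumptions made purely for illustration.

# Illustrative sketch only: a probabilistic high-level selector chooses which
# low-level heuristic updates an FFNN's weights at each step, and updates its
# beliefs from the observed change in training loss.
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer FFNN on a toy regression problem (assumed for illustration)
X = rng.normal(size=(128, 4))
y = np.sin(X.sum(axis=1, keepdims=True))          # toy target

def init_params():
    return {"W1": rng.normal(scale=0.5, size=(4, 8)), "b1": np.zeros(8),
            "W2": rng.normal(scale=0.5, size=(8, 1)), "b2": np.zeros(1)}

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"], h

def loss(p):
    out, _ = forward(p, X)
    return float(np.mean((out - y) ** 2))

# Two stand-in low-level heuristics: a gradient step and a random perturbation
def sgd_step(p, lr=0.05):
    out, h = forward(p, X)
    err = 2 * (out - y) / len(X)                  # dL/d(output) for mean squared error
    g = {"W2": h.T @ err, "b2": err.sum(0),
         "W1": X.T @ ((err @ p["W2"].T) * (1 - h ** 2)),
         "b1": ((err @ p["W2"].T) * (1 - h ** 2)).sum(0)}
    return {k: p[k] - lr * g[k] for k in p}

def perturb_step(p, sigma=0.02):
    return {k: v + rng.normal(scale=sigma, size=v.shape) for k, v in p.items()}

heuristics = [sgd_step, perturb_step]

# Bayesian selection: one Beta posterior per heuristic over "did this step improve the loss?"
alpha = np.ones(len(heuristics))                  # improvement counts + uniform prior
beta = np.ones(len(heuristics))                   # non-improvement counts + uniform prior

params = init_params()
for step in range(300):
    # Thompson sampling: draw a success probability per heuristic, apply the best draw
    k = int(np.argmax(rng.beta(alpha, beta)))
    before = loss(params)
    candidate = heuristics[k](params)
    after = loss(candidate)
    improved = after < before
    alpha[k] += improved                          # update the selector's posterior
    beta[k] += not improved
    if improved:
        params = candidate                        # greedy acceptance of improvements

print("final loss:", round(loss(params), 4),
      "| posterior mean per heuristic:", np.round(alpha / (alpha + beta), 2))

In the study itself the heuristic pool contains ten low-level heuristics spanning gradient-based methods and metaheuristics, applied in a population-based setting; the two stand-ins above are only meant to convey the shape of the select-apply-update loop.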
Pages: 16
Related papers
50 in total (items [41]-[50] shown)
  • [41] Collective Hyper-heuristics for Self-assembling Robot Behaviours
    Yu, Shuang
    Song, Andy
    Aleti, Aldeida
    PRICAI 2018: TRENDS IN ARTIFICIAL INTELLIGENCE, PT II, 2018, 11013 : 499 - 507
  • [42] Improving the Performance of Vector Hyper-heuristics through Local Search
    Ortiz-Bayliss, Jose Carlos
    Terashima-Marin, Hugo
    Conant-Pablos, Santiago Enrique
    Oezcan, Ender
    Parkes, Andrew J.
    PROCEEDINGS OF THE FOURTEENTH INTERNATIONAL CONFERENCE ON GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, 2012, : 1269 - 1276
  • [43] Hyper-heuristics Reversed: Learning to Combine Solvers by Evolving Instances
    Amaya, Ivan
    Ortiz-Bayliss, Jose Carlos
    Conant-Pablos, Santiago
    Terashima-Marin, Hugo
    2019 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC), 2019, : 1790 - 1797
  • [44] Training feedforward neural networks using neural networks and genetic algorithms
    Tellez, P
    Tang, Y
    INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATIONS AND CONTROL TECHNOLOGIES, VOL 1, PROCEEDINGS, 2004, : 308 - 311
  • [45] New Algorithms for Training Feedforward Neural Networks
    Kak, S.
    PATTERN RECOGNITION LETTERS, 1994, 15 (03) : 295 - 298
  • [46] Exploring Classificational Cellular Automaton Hyper-heuristics for Solving the Knapsack Problem
    Zarate-Aranda, Jose Eduardo
    Ortiz-Bayliss, Jose Carlos
    ADVANCES IN SOFT COMPUTING, PT II, MICAI 2024, 2025, 15247 : 57 - 69
  • [47] Choice function based hyper-heuristics for multi-objective optimization
    Maashi, Mashael
    Kendall, Graham
    Oezcan, Ender
    APPLIED SOFT COMPUTING, 2015, 28 : 312 - 326
  • [48] Hyper-Heuristics for Online UAV Path Planning Under Imperfect Information
    Akar, Engin
    Topcuoglu, Haluk Rahmi
    Ermis, Murat
    APPLICATIONS OF EVOLUTIONARY COMPUTATION, 2014, 8602 : 741 - 752
  • [49] Wireless edge device intelligent task offloading in mobile edge computing using hyper-heuristics
    Vijayaram, B.
    Vasudevan, V.
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2022, 2022 (01)
  • [50] Optimizing agents with genetic programming: an evaluation of hyper-heuristics in dynamic real-time logistics
    van Lon, Rinde R. S.
    Branke, Juergen
    Holvoet, Tom
    GENETIC PROGRAMMING AND EVOLVABLE MACHINES, 2018, 19 (1-2) : 93 - 120