RHOASo: An Early Stop Hyper-Parameter Optimization Algorithm

Cited by: 2
Authors
Muñoz Castañeda, Ángel Luis [1,2]
DeCastro-García, Noemí [1,2]
Escudero García, David [2]
Affiliations
[1] Univ Leon, Dept Math, Leon 24007, Spain
[2] Univ Leon, Res Inst Appl Sci Cybersecur RIASC, Leon 24007, Spain
Keywords
hyperparameters; machine learning; optimization; inference; particle swarm; random search
DOI
10.3390/math9182334
CLC Number
O1 [Mathematics]
Subject Classification Codes
0701; 070101
Abstract
This work proposes RHOASo, a new algorithm for optimizing the hyper-parameters of a machine learning algorithm, based on the conditional optimization of concave asymptotic functions. A comparative analysis of the algorithm is presented, with particular emphasis on two important properties: its ability to work efficiently with only a small part of a dataset, and its ability to finish the tuning process automatically, that is, without the user having to specify the number of iterations the algorithm must perform. Statistical analyses comparing the performance of RHOASo with that of seven other hyper-parameter optimization algorithms were carried out over 16 public benchmark datasets. The efficiency of RHOASo shows statistically significant positive differences with respect to the other hyper-parameter optimization algorithms considered in the experiments. Furthermore, it is shown that, on average, the algorithm needs around 70% of the iterations required by the other algorithms to achieve competitive performance. The results also show that the algorithm is notably stable with respect to the size of the dataset partition used.
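Since the abstract's defining claims are automatic termination of tuning and tuning on a small dataset partition, a minimal sketch of that general idea may help. This is a generic early-stop tuning loop, not the RHOASo algorithm itself: the RandomForestClassifier model, the candidate grid, the 30% subsample, and the TOL/PATIENCE plateau rule are all hypothetical choices made for illustration only.

```python
# Illustrative sketch only: a generic early-stop hyper-parameter tuning
# loop, NOT the RHOASo algorithm. The grid, subsample size, and the
# TOL/PATIENCE stopping rule are hypothetical choices for this example.
import itertools
import random

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Tune on a small partition of the data (30% here, an arbitrary choice),
# in the spirit of the paper's claim that a small part suffices.
X_small, _, y_small, _ = train_test_split(X, y, train_size=0.3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(
    X_small, y_small, test_size=0.25, random_state=0)

# Candidate (n_estimators, max_depth) configurations, visited in random order.
candidates = list(itertools.product([50, 100, 200, 400], [2, 4, 8, None]))
random.Random(0).shuffle(candidates)

TOL, PATIENCE = 1e-3, 3  # stop once validation improvements plateau
best_score, best_cfg, stall = -1.0, None, 0

for n_estimators, max_depth in candidates:
    model = RandomForestClassifier(
        n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    model.fit(X_tr, y_tr)
    score = model.score(X_val, y_val)  # validation accuracy
    if score > best_score + TOL:
        best_score, best_cfg, stall = score, (n_estimators, max_depth), 0
    else:
        stall += 1
        if stall >= PATIENCE:
            # Tuning ends automatically: no iteration budget is set by the user.
            break

print(f"best config {best_cfg} with validation accuracy {best_score:.3f}")
```

The point of the sketch is the stopping rule: the loop terminates on its own once further evaluations stop improving the validation score, rather than after a user-specified number of iterations.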
Pages: 52