Efficient hyperparameters optimization through model-based reinforcement learning with experience exploiting and meta-learning

Cited by: 5
|
Authors
Liu, Xiyuan [1]
Wu, Jia [1]
Chen, Senpeng [1]
Affiliation
[1] Univ Elect Sci & Technol China, Chengdu, Peoples R China
Funding
US National Science Foundation;
Keywords
Hyperparameter optimization; Reinforcement learning; Meta-learning; Deep learning; CLASSIFIERS;
DOI
10.1007/s00500-023-08050-x
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104; 0812; 0835; 1405;
Abstract
Hyperparameter optimization plays a significant role in the overall performance of machine learning algorithms. However, the computational cost of evaluating candidate configurations can be extremely high for complex algorithms or large datasets. In this paper, we propose a model-based reinforcement learning method with an experience variable and meta-learning to speed up hyperparameter optimization. Specifically, an RL agent selects hyperparameters and treats the k-fold cross-validation result as a reward signal to update its policy. To guide the agent's policy update, we design an embedding representation called the "experience variable" and update it dynamically during training. In addition, we employ a predictive model to estimate the performance of the machine learning algorithm under the selected hyperparameters, and we limit model rollouts to a short horizon to reduce the impact of model inaccuracy. Finally, we use meta-learning to pre-train the model so that it adapts quickly to new tasks. To demonstrate the advantages of our method, we conduct experiments on 25 real HPO tasks; the results show that, under limited computational resources, the proposed method outperforms state-of-the-art Bayesian methods and an evolutionary method.
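The core loop described in the abstract — an agent that selects hyperparameters and uses a cross-validation score as its reward — can be sketched minimally as an epsilon-greedy bandit over a discrete search space. This is an illustrative simplification, not the paper's method: the grid, the `cv_score` function (a synthetic stand-in for real k-fold cross-validation), and all parameter values below are assumptions for demonstration only.

```python
import random

# Hypothetical discrete search space (illustrative candidate learning rates).
GRID = [0.001, 0.01, 0.1, 1.0]

def cv_score(lr):
    """Stand-in for k-fold cross-validation. In a real HPO setting this
    would train the target model k times with the chosen hyperparameter
    and return the mean validation score; here it is a synthetic curve
    peaking at lr = 0.01."""
    return 1.0 - abs(0.01 - lr)

def epsilon_greedy_hpo(episodes=200, eps=0.2, seed=0):
    """Agent repeatedly picks a hyperparameter (explore vs. exploit) and
    updates its value estimate from the CV reward signal."""
    rng = random.Random(seed)
    q = [0.0] * len(GRID)   # value estimate per hyperparameter choice
    n = [0] * len(GRID)     # visit counts
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(len(GRID))            # explore
        else:
            a = max(range(len(GRID)), key=lambda i: q[i])  # exploit
        r = cv_score(GRID[a])           # CV result used as the reward
        n[a] += 1
        q[a] += (r - q[a]) / n[a]       # incremental mean update
    best = max(range(len(GRID)), key=lambda i: q[i])
    return GRID[best]

best_lr = epsilon_greedy_hpo()
```

The paper's contributions go beyond this loop: the experience variable guides the policy update, a learned predictive model replaces some expensive evaluations (with short rollout horizons to bound model error), and meta-learning pre-trains that model for fast adaptation to new tasks.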
Pages: 8661-8678
Page count: 18
Related Papers
50 total
  • [41] Asynchronous Methods for Model-Based Reinforcement Learning
    Zhang, Yunzhi
    Clavera, Ignasi
    Tsai, Boren
    Abbeel, Pieter
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
  • [42] Efficient Meta-Learning for Continual Learning with Taylor Expansion Approximation
    Zou, Xiaohan
    Lin, Tong
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [43] UAV Maneuvering Target Tracking in Uncertain Environments Based on Deep Reinforcement Learning and Meta-Learning
    Li, Bo
    Gan, Zhigang
    Chen, Daqing
    Sergey Aleksandrovich, Dyachenko
    REMOTE SENSING, 2020, 12 (22) : 1 - 20
  • [44] Efficient Meta-Learning through Task-Specific Pseudo Labelling
    Lee, Sanghyuk
    Lee, Seunghyun
    Song, Byung Cheol
    ELECTRONICS, 2023, 12 (13)
  • [45] FASTER OPTIMIZATION-BASED META-LEARNING ADAPTATION PHASE
    Khabarlak, K. S.
    RADIO ELECTRONICS COMPUTER SCIENCE CONTROL, 2022, (01) : 82 - 92
  • [46] Predictive model-based multi-objective optimization with life-long meta-learning for designing unreliable production systems
    Mahmoodi, Ehsan
    Fathi, Masood
    Ng, Amos H. C.
    Dolgui, Alexandre
    COMPUTERS & OPERATIONS RESEARCH, 2025, 178
  • [47] TOWARDS ROBUSTNESS: ENHANCING DEEP LEARNING MODELS THROUGH META-LEARNING AND BILEVEL OPTIMIZATION FOR ACCURATE CAR DAMAGE CLASSIFICATION
    Mallem, Soufiane
    Nakib, Amir
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 1435 - 1439
  • [48] Meta-learning for efficient unsupervised domain adaptation
    Vettoruzzo, Anna
    Bouguelia, Mohamed-Rafik
    Roegnvaldsson, Thorsteinn
    NEUROCOMPUTING, 2024, 574
  • [49] Meta-learning for evolutionary parameter optimization of classifiers
    Reif, Matthias
    Shafait, Faisal
    Dengel, Andreas
    MACHINE LEARNING, 2012, 87 (03) : 357 - 380
  • [50] Meta-learning approach to neural network optimization
    Kordik, Pavel
    Koutnik, Jan
    Drchal, Jan
    Kovarik, Oleg
    Cepek, Miroslav
    Snorek, Miroslav
    NEURAL NETWORKS, 2010, 23 (04) : 568 - 582