Efficient hyperparameters optimization through model-based reinforcement learning with experience exploiting and meta-learning

Cited: 5
Authors
Liu, Xiyuan [1 ]
Wu, Jia [1 ]
Chen, Senpeng [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Peoples R China
Funding
National Science Foundation (USA);
Keywords
Hyperparameters optimization; Reinforcement learning; Meta-learning; Deep learning; CLASSIFIERS;
DOI
10.1007/s00500-023-08050-x
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Hyperparameter optimization plays a significant role in the overall performance of machine learning algorithms. However, the cost of evaluating an algorithm can be extremely high for complex algorithms or large datasets. In this paper, we propose a model-based reinforcement learning method with an experience variable and meta-learning to speed up hyperparameter optimization. Specifically, an RL agent selects hyperparameters and treats the k-fold cross-validation result as a reward signal for updating the agent. To guide the agent's policy updates, we design an embedding representation called the "experience variable" and update it dynamically during training. In addition, we employ a predictive model to estimate the performance of the machine learning algorithm under the selected hyperparameters, and we limit model rollouts to a short horizon to reduce the impact of model inaccuracy. Finally, we use meta-learning to pre-train the model so that it adapts quickly to a new task. To demonstrate the advantages of our method, we conduct experiments on 25 real HPO tasks; the results show that, under limited computational resources, the proposed method outperforms state-of-the-art Bayesian and evolutionary methods.
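The abstract's core loop, an agent proposing hyperparameters and receiving a k-fold cross-validation score as its reward, can be sketched roughly as below. This is an illustrative assumption, not the paper's actual method: a simple epsilon-greedy bandit and a decision-tree search space stand in for the paper's learned policy, experience variable, and predictive model.

```python
# Minimal sketch: hyperparameter selection as RL, with k-fold CV
# accuracy as the reward signal (k=5 here).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

# Discrete search space over one hyperparameter (tree max_depth).
actions = [1, 2, 3, 5, 8]
q = np.zeros(len(actions))  # running value estimate per action
n = np.zeros(len(actions))  # visit count per action

def reward(depth):
    # The k-fold cross-validation result acts as the reward.
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

for step in range(30):
    # Epsilon-greedy selection stands in for the learned policy.
    if rng.random() < 0.2:
        a = int(rng.integers(len(actions)))
    else:
        a = int(np.argmax(q))
    r = reward(actions[a])
    n[a] += 1
    q[a] += (r - q[a]) / n[a]  # incremental mean update

best = actions[int(np.argmax(q))]
print("best max_depth:", best, "estimated CV accuracy:", round(float(q.max()), 3))
```

In the paper, each call to `reward` would instead often be replaced by a cheap prediction from the learned performance model, with real cross-validation reserved for short-horizon corrections.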
Pages: 8661-8678
Page count: 18
Related papers
50 in total
  • [1] Efficient hyperparameters optimization through model-based reinforcement learning with experience exploiting and meta-learning
    Xiyuan Liu
    Jia Wu
    Senpeng Chen
    Soft Computing, 2023, 27 : 8661 - 8678
  • [2] Efficient hyperparameter optimization through model-based reinforcement learning
    Wu, Jia
    Chen, SenPeng
    Liu, XiYuan
    NEUROCOMPUTING, 2020, 409 : 381 - 393
  • [3] Meta-learning in Reinforcement Learning
    Schweighofer, N
    Doya, K
    NEURAL NETWORKS, 2003, 16 (01) : 5 - 9
  • [4] Optimization on selecting XGBoost hyperparameters using meta-learning
    Lima Marinho, Tiago
    do Nascimento, Diego Carvalho
    Pimentel, Bruno Almeida
    EXPERT SYSTEMS, 2024, 41 (09)
  • [5] Towards Continual Reinforcement Learning through Evolutionary Meta-Learning
    Grbic, Djordje
    Risi, Sebastian
    PROCEEDINGS OF THE 2019 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION (GECCO'19 COMPANION), 2019, : 119 - 120
  • [6] Efficient state synchronisation in model-based testing through reinforcement learning
    Turker, Uraz Cengiz
    Hierons, Robert M.
    Mousavi, Mohammad Reza
    Tyukin, Ivan Y.
    2021 36TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING ASE 2021, 2021, : 368 - 380
  • [7] Data-Efficient Task Generalization via Probabilistic Model-Based Meta Reinforcement Learning
    Bhardwaj, Arjun
    Rothfuss, Jonas
    Sukhija, Bhavya
    As, Yarden
    Hutter, Marco
    Coros, Stelian
    Krause, Andreas
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (04) : 3918 - 3925
  • [8] Fast Human-in-the-Loop Control for HVAC Systems via Meta-Learning and Model-Based Offline Reinforcement Learning
    Chen, Liangliang
    Meng, Fei
    Zhang, Ying
    IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING, 2023, 8 (03): : 504 - 521
  • [9] Online Meta-Learning for Hybrid Model-Based Deep Receivers
    Raviv, Tomer
    Park, Sangwoo
    Simeone, Osvaldo
    Eldar, Yonina C.
    Shlezinger, Nir
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, 22 (10) : 6415 - 6431
  • [10] A context-based meta-reinforcement learning approach to efficient hyperparameter optimization
    Liu, Xiyuan
    Wu, Jia
    Chen, Senpeng
    NEUROCOMPUTING, 2022, 478 : 89 - 103