Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization

Cited by: 0
Authors:
Tran-The, Hung [1]
Gupta, Sunil [1]
Rana, Santu [1]
Venkatesh, Svetha [1]
Affiliations:
[1] Deakin Univ, Appl Artificial Intelligence Inst, Geelong, Vic, Australia
Source:
International Conference on Artificial Intelligence and Statistics (AISTATS), Vol. 151, 2022
Funding:
Australian Research Council
Keywords:
Bayesian optimization
DOI:
Not available
CLC classification:
TP18 [Artificial Intelligence Theory]
Subject classification codes:
081104; 0812; 0835; 1405
Abstract:
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty due to its simplicity and efficiency. Despite its popularity, the theoretical aspects of this algorithm have not been properly analyzed. In particular, whether the EI strategy with a standard incumbent converges in the noisy setting remains an open question in Gaussian process (GP) bandit optimization. We aim to answer this question by proposing a variant of EI with a standard incumbent defined via the GP predictive mean. We prove that our algorithm converges and achieves a cumulative regret bound of $O(\gamma_T \sqrt{T})$, where $\gamma_T$ is the maximum information gain between $T$ observations and the Gaussian process model. Based on this variant of EI, we further propose an algorithm called Improved GP-EI that converges faster than previous counterparts. In particular, our proposed variants of EI do not require knowledge of the RKHS norm or of the noise's sub-Gaussianity parameter, unlike previous works. Empirical validation in our paper demonstrates the effectiveness of our algorithms compared to several baselines.
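To make the incumbent choice concrete, here is a minimal, self-contained Python sketch (not the authors' code) of one round of the acquisition the abstract describes: EI whose incumbent is the maximum of the GP predictive mean at the queried points, rather than the best noisy observation. The RBF kernel, its length-scale, the noise level, the toy objective, and the candidate grid are all illustrative assumptions.

```python
# A minimal sketch of EI with a predictive-mean incumbent, assuming an RBF
# kernel and a fixed 1-D candidate grid; all hyperparameters are illustrative.
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 l^2))."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * lengthscale ** 2))

def gp_posterior(X, y, Xs, noise_var=1e-2):
    """GP predictive mean and std. dev. at test points Xs given data (X, y)."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf_kernel(Xs, Xs).diagonal() - (v ** 2).sum(0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def ei_mean_incumbent(X, y, Xs):
    """EI acquisition whose incumbent is the best predictive mean at the
    queried points, not the best (noisy) observed value."""
    mu_obs, _ = gp_posterior(X, y, X)
    incumbent = mu_obs.max()                        # noise-robust incumbent
    mu, sigma = gp_posterior(X, y, Xs)
    z = (mu - incumbent) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))  # closed-form EI

# Usage: pick the next query point on a candidate grid (toy example).
rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x[:, 0])                 # hypothetical objective
X = rng.uniform(0.0, 1.0, size=(5, 1))              # past query locations
y = f(X) + 0.1 * rng.standard_normal(5)             # noisy observations
Xs = np.linspace(0.0, 1.0, 200)[:, None]            # candidate grid
x_next = Xs[np.argmax(ei_mean_incumbent(X, y, Xs))]
print("next query:", x_next)
```

Intuitively, taking the incumbent from the posterior mean rather than from the noisy observations keeps a single optimistic noise realization from inflating the improvement target, which is in the spirit of the noisy-setting analysis summarized in the abstract.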
Pages: 23