Tight Regret Bounds for Infinite-armed Linear Contextual Bandits

Cited: 0
Authors
Li, Yingkai [1 ]
Wang, Yining [2 ]
Chen, Xi [3 ]
Zhou, Yuan [4 ]
Affiliations
[1] Northwestern University, Evanston, IL 60208, USA
[2] University of Florida, Gainesville, FL, USA
[3] New York University, New York, NY, USA
[4] University of Illinois, Urbana, IL, USA
Source
24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021, Vol. 130
Keywords
DOI
None available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Linear contextual bandits are an important class of sequential decision-making problems with a wide range of applications to recommender systems, online advertising, healthcare, and many other machine learning tasks. Despite extensive prior research, tight regret bounds for linear contextual bandits with infinite action sets have remained open. In this paper, we address this open problem by considering linear contextual bandits with (changing) infinite action sets. We prove a regret upper bound of order O(√(d²T log T)) × poly(log log T), where d is the domain dimension and T is the time horizon. Our upper bound matches the previous lower bound of Ω(√(d²T log T)) in [Li et al., 2019] up to iterated-logarithmic terms.
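The regret bound above concerns optimism-based learners for linear contextual bandits. As a minimal illustrative sketch (not the paper's algorithm), a LinUCB-style learner facing a changing action set each round might look like the following; the function names, the ridge regularization `lam`, and the confidence width `beta` are assumptions for illustration:

```python
import numpy as np

def linucb(contexts_fn, theta_star, T, lam=1.0, beta=1.0, noise=0.1, seed=None):
    """Minimal LinUCB-style learner (illustrative sketch, not the paper's algorithm).

    contexts_fn(t) returns the round-t action set as an (n_arms, d) array;
    the reward of action x is <x, theta_star> plus Gaussian noise.
    Returns the cumulative pseudo-regret over T rounds.
    """
    rng = np.random.default_rng(seed)
    d = theta_star.shape[0]
    V = lam * np.eye(d)   # regularized Gram matrix of played actions
    b = np.zeros(d)       # running sum of reward-weighted actions
    regret = 0.0
    for t in range(T):
        X = contexts_fn(t)                     # changing action set this round
        theta_hat = np.linalg.solve(V, b)      # ridge regression estimate
        V_inv = np.linalg.inv(V)
        # optimistic index: estimated reward + exploration bonus ||x||_{V^{-1}}
        bonus = np.sqrt(np.einsum('ij,jk,ik->i', X, V_inv, X))
        x = X[np.argmax(X @ theta_hat + beta * bonus)]
        r = x @ theta_star + noise * rng.standard_normal()
        regret += np.max(X @ theta_star) - x @ theta_star
        V += np.outer(x, x)
        b += r * x
    return regret
```

With unit-norm actions, the cumulative regret of such optimism-based methods grows sublinearly in T; the paper's contribution is pinning the exact rate down to O(√(d²T log T)) up to poly(log log T) factors.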
Pages: 370-378
Page count: 9
References
11 entries
  • [1] Abbasi-Yadkori Y., 2011, Proceedings of Advances in Neural Information Processing Systems (NIPS)
  • [2] [Anonymous], 2018, Proceedings of Advances in Neural Information Processing Systems (NeurIPS)
  • [3] [Anonymous], 2008, Proceedings of the Conference on Learning Theory (COLT)
  • [4] Audibert J.-Y., 2009, Proceedings of the Conference on Learning Theory (COLT)
  • [5] Auer P., 2003, Journal of Machine Learning Research, 3:397, DOI: 10.1162/153244303321897663
  • [6] Chu W., 2011, Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)
  • [7] Filippi S., 2010, Proceedings of Advances in Neural Information Processing Systems (NIPS)
  • [8] Li L., 2017, Proceedings of the International Conference on Machine Learning (ICML)
  • [9] Li Y., 2019, Proceedings of the Annual Conference on Learning Theory (COLT)
  • [10] Rusmevichientong P., Tsitsiklis J. N., "Linearly Parameterized Bandits", Mathematics of Operations Research, 2010, 35(2):395-411