Bridging Adversarial and Nonstationary Multi-Armed Bandit

Cited by: 1
Authors
Chen, Ningyuan [1 ]
Yang, Shuoguang [2 ]
Zhang, Hailun [3 ]
Affiliations
[1] Univ Toronto, Rotman Sch Management, Toronto, ON, Canada
[2] Hong Kong Univ Sci & Technol, Dept Ind Engn & Decis Analyt, Kowloon, Clear Water Bay, Hong Kong, Peoples R China
[3] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial Bandit; Multi-Armed Bandit; Regret Analysis;
DOI
10.1177/10591478251313780
Chinese Library Classification
T [Industrial Technology];
Subject Classification Code
08;
Abstract
In the multi-armed bandit framework, two formulations are commonly employed to handle time-varying reward distributions: the adversarial bandit and the nonstationary bandit. Although their oracles, algorithms, and regret analyses differ significantly, this article provides a unified formulation that smoothly bridges the two as special cases. The formulation uses an oracle that selects the best action sequence within a switch budget. Depending on the switch budget, it recovers the oracle in hindsight of the adversarial bandit and the dynamic oracle of the nonstationary bandit. We provide algorithms that attain the optimal regret, matching the lower bound. The optimal regret displays distinct behavior in two regimes.
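As an illustration of the benchmark described in the abstract, the value of the best action sequence under a switch budget can be computed offline by dynamic programming over (switches used, last arm played). The sketch below is ours, not the paper's algorithm; the function name and reward layout are assumptions. With budget 0 it reduces to the adversarial bandit's single-best-arm oracle in hindsight, and with budget T - 1 to the nonstationary bandit's dynamic oracle.

```python
def oracle_value(rewards, S):
    """Best total reward of any action sequence using at most S switches.

    rewards: list of T lists, rewards[t][k] = reward of arm k at round t.
    S = 0 recovers the best-fixed-arm oracle (adversarial bandit);
    S = T - 1 recovers the dynamic oracle (nonstationary bandit).
    """
    T, K = len(rewards), len(rewards[0])
    NEG = float("-inf")
    # dp[s][k]: best cumulative reward so far, ending at arm k with s switches.
    dp = [[NEG] * K for _ in range(S + 1)]
    dp[0] = list(rewards[0])
    for t in range(1, T):
        new = [[NEG] * K for _ in range(S + 1)]
        for s in range(S + 1):
            # Best value attainable with s - 1 switches, any ending arm.
            best_prev = max(dp[s - 1]) if s > 0 else NEG
            for k in range(K):
                # Either stay on arm k, or spend one switch to move to it.
                new[s][k] = rewards[t][k] + max(dp[s][k], best_prev)
        dp = new
    # "At most S switches": take the best over all budgets actually used.
    return max(max(row) for row in dp)
```

For example, with rewards `[[1, 0], [0, 1], [1, 0]]`, a budget of 0 yields 2 (play arm 0 throughout), while a budget of 2 yields 3 (follow the best arm each round), mirroring how the oracle interpolates between the two regimes.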
Pages: 2218-2231
Number of pages: 14