Global Bandits

Cited by: 14
Authors
Atan, Onur [1 ]
Tekin, Cem [2 ]
van der Schaar, Mihaela [1 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Elect Engn, Los Angeles, CA 90024 USA
[2] Bilkent Univ, Dept Elect & Elect Engn, TR-06800 Ankara, Turkey
Funding
National Science Foundation (US);
Keywords
Bounded regret; informative arms; multiarmed bandits (MABs); online learning; regret analysis; MULTIARMED BANDIT;
DOI
10.1109/TNNLS.2018.2818742
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multiarmed bandits (MABs) model sequential decision-making problems in which a learner sequentially chooses arms with unknown reward distributions in order to maximize its cumulative reward. Most prior work on MABs assumes that the reward distributions of the arms are independent. But in a wide variety of decision problems, from drug dosage to dynamic pricing, the expected rewards of different arms are correlated, so that selecting one arm also provides information about the expected rewards of the other arms. We propose and analyze a class of models of such decision problems, which we call global bandits (GB). In the case in which the rewards of all arms are deterministic functions of a single unknown parameter, we construct a greedy policy that achieves bounded regret, with a bound that depends on the single true parameter of the problem. Hence, this policy selects suboptimal arms only finitely many times with probability one. For this case, we also obtain a bound on regret that is independent of the true parameter; this bound is sublinear, with an exponent that depends on the informativeness of the arms. We also propose a variant of the greedy policy that achieves O(sqrt(T)) worst-case and O(1) parameter-dependent regret. Finally, we perform experiments on dynamic pricing and show that the proposed algorithms achieve significant gains with respect to well-known benchmarks.
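The greedy idea sketched in the abstract can be illustrated with a small simulation. In the sketch below (a minimal, hypothetical illustration, not the paper's exact algorithm), each arm's expected reward is a known, strictly monotone function of a single unknown parameter theta, so every pull of any arm is informative about all arms: the learner inverts each arm's sample-mean reward to get per-arm estimates of theta, combines them, and greedily plays the arm whose reward function is highest at the combined estimate. All function names and numerical choices (reward curves, noise level, weighting by pull counts) are illustrative assumptions.

```python
import random

theta_true = 0.7  # single unknown parameter (hidden from the learner)

# Known reward functions mu_k(theta), strictly monotone on [0, 1],
# paired with their inverses (illustrative choices).
arms = [
    (lambda th: 0.5 * th,     lambda r: r / 0.5),
    (lambda th: th ** 2,      lambda r: r ** 0.5),
    (lambda th: 1 - 0.8 * th, lambda r: (1 - r) / 0.8),
]

def pull(k, rng):
    """Noisy reward from arm k."""
    mu, _ = arms[k]
    return mu(theta_true) + rng.gauss(0, 0.05)

def greedy_gb(horizon=2000, seed=0):
    rng = random.Random(seed)
    sums = [0.0] * len(arms)
    counts = [0] * len(arms)
    # One initial pull per arm so every inverse is defined.
    for k in range(len(arms)):
        sums[k] += pull(k, rng)
        counts[k] += 1
    for _ in range(horizon - len(arms)):
        # Per-arm estimates of theta: invert each arm's clipped sample mean.
        ests, weights = [], []
        for k, (_, inv) in enumerate(arms):
            mean_r = min(max(sums[k] / counts[k], 0.0), 1.0)
            ests.append(min(max(inv(mean_r), 0.0), 1.0))
            weights.append(counts[k])
        # Combine estimates (here: weighted by pull counts).
        theta_hat = sum(e * w for e, w in zip(ests, weights)) / sum(weights)
        # Greedy step: play the arm with the highest estimated mean reward.
        k = max(range(len(arms)), key=lambda j: arms[j][0](theta_hat))
        sums[k] += pull(k, rng)
        counts[k] += 1
    return theta_hat, counts

theta_hat, counts = greedy_gb()
```

Because every pull refines the shared estimate of theta, the per-arm estimates concentrate even for arms played rarely, which is the intuition behind the bounded (parameter-dependent) regret claimed for the deterministic-function case.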
Pages: 5798-5811
Page count: 14