Multi-armed bandits: Theory and applications to online learning in networks

Cited by: 0
Author
Zhao, Qing
Source
Morgan & Claypool Publishers, Vol. 12 (2020)
Funding
European Union Horizon 2020; U.S. National Science Foundation
Keywords
Machine learning; Markov decision processes; Multi-armed bandit; Online learning; Reinforcement learning
DOI
10.2200/S00941ED2V01Y201907CNT022
Abstract
Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem was posed by Thompson in 1933 in the context of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the two schools of approaches, Bayesian and frequentist, and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and social-economic systems, aiming to illuminate the connections between the Bayesian and the frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other. © 2020 by Morgan & Claypool. All rights reserved.
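To make the Bayesian/frequentist contrast in the abstract concrete, below is a minimal, self-contained Python sketch (not taken from the book) that runs a frequentist index policy, UCB1, and Bayesian Thompson sampling on the same Bernoulli bandit. The environment class, arm probabilities, horizon, and function names are all illustrative assumptions, not the book's notation.

```python
import math
import random

# Hypothetical Bernoulli bandit environment (illustrative, not from the book):
# arm i returns reward 1 with unknown probability probs[i], else 0.
class BernoulliBandit:
    def __init__(self, probs, seed=0):
        self.probs = probs
        self.rng = random.Random(seed)

    def pull(self, arm):
        return 1 if self.rng.random() < self.probs[arm] else 0

def ucb1(bandit, n_arms, horizon):
    """Frequentist approach: play the arm with the highest UCB1 index."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0
    for arm in range(n_arms):  # initialize: pull each arm once
        r = bandit.pull(arm)
        counts[arm], means[arm], total = 1, float(r), total + r
    for t in range(n_arms, horizon):
        # index = empirical mean + confidence bonus that shrinks with counts
        arm = max(range(n_arms),
                  key=lambda i: means[i] + math.sqrt(2 * math.log(t + 1) / counts[i]))
        r = bandit.pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
        total += r
    return total

def thompson(bandit, n_arms, horizon, seed=1):
    """Bayesian approach: Thompson sampling with Beta(1, 1) priors."""
    rng = random.Random(seed)
    alpha = [1] * n_arms  # 1 + observed successes per arm
    beta = [1] * n_arms   # 1 + observed failures per arm
    total = 0
    for _ in range(horizon):
        # draw one sample from each arm's posterior; play the argmax
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        r = bandit.pull(arm)
        alpha[arm] += r
        beta[arm] += 1 - r
        total += r
    return total

if __name__ == "__main__":
    probs = [0.3, 0.5, 0.7]  # unknown to the learner
    horizon = 5000
    for name, algo in [("UCB1", ucb1), ("Thompson", thompson)]:
        reward = algo(BernoulliBandit(probs), len(probs), horizon)
        print(f"{name}: total reward {reward} (oracle ~ {0.7 * horizon:.0f})")
```

Both learners face the same exploration-exploitation trade-off; UCB1 resolves it with an explicit confidence bonus on empirical means, while Thompson sampling resolves it through posterior randomization, mirroring the frequentist and Bayesian formulations the book contrasts.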
Pages: 1-165