Multi-armed bandits: Theory and applications to online learning in networks

Cited by: 5
Author
Zhao Q. [1 ]
Affiliation
[1] Zhao, Qing
Source
Morgan & Claypool Publishers, vol. 12, 2020
Funding
EU Horizon 2020; U.S. National Science Foundation
Keywords
Machine learning; Markov decision processes; Multi-armed bandit; Online learning; Reinforcement learning;
DOI
10.2200/S00941ED2V01Y201907CNT022
Abstract
Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem, posed by Thompson in 1933 in the context of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the Bayesian and frequentist schools of approaches and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and social-economic systems, aiming to illuminate the connections between the Bayesian and the frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other. © 2020 by Morgan & Claypool. All rights reserved.
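To make the abstract's contrast between the two formulations concrete, below is a minimal, self-contained sketch (not drawn from the book): a frequentist UCB1 index policy and Bayesian Thompson sampling with a Beta(1, 1) prior, each run on the same two-armed Bernoulli bandit. The arm means `TRUE_MEANS`, the horizon `HORIZON`, and all function names are illustrative assumptions.

```python
import math
import random

# Hedged sketch: a two-armed Bernoulli bandit played with (a) the frequentist
# UCB1 index policy and (b) Bayesian Thompson sampling with a Beta(1, 1) prior.
# TRUE_MEANS and HORIZON are illustrative assumptions, not values from the book.
TRUE_MEANS = [0.4, 0.6]   # hypothetical Bernoulli success probabilities
HORIZON = 5000

def pull(arm):
    """Draw a Bernoulli reward from the chosen arm."""
    return 1 if random.random() < TRUE_MEANS[arm] else 0

def ucb1():
    """Frequentist: play the arm maximizing empirical mean + sqrt(2 ln t / n)."""
    n_arms = len(TRUE_MEANS)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total = 0.0
    for t in range(1, HORIZON + 1):
        if t <= n_arms:                  # play each arm once to initialize
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total

def thompson():
    """Bayesian: sample each arm's mean from its Beta posterior, play the argmax."""
    n_arms = len(TRUE_MEANS)
    alpha = [1] * n_arms                 # Beta(1, 1) uniform prior
    beta = [1] * n_arms
    total = 0.0
    for _ in range(HORIZON):
        samples = [random.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        arm = samples.index(max(samples))
        r = pull(arm)
        alpha[arm] += r                  # posterior update on the observed reward
        beta[arm] += 1 - r
        total += r
    return total

if __name__ == "__main__":
    random.seed(0)
    print("UCB1 total reward:    ", ucb1())
    print("Thompson total reward:", thompson())
```

Both policies balance exploration and exploitation; UCB1 does so through a confidence bonus that shrinks as an arm is sampled, while Thompson sampling does so through posterior randomization, which is one way the structural parallels highlighted in the abstract can be seen in code.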
Pages: 1–165 (164 pages)