Optimal Learning for Structured Bandits

Cited by: 0
Authors
Van Parys, Bart [1 ]
Golrezaei, Negin [1 ]
Affiliations
[1] MIT, Sloan Sch Management, Cambridge, MA 02142 USA
Keywords
structured bandits; online learning; convex duality; mimicking regret lower bound;
DOI
10.1287/mnsc.2020.02108
CLC classification
C93 [Management];
Subject classification codes
12; 1201; 1202; 120202
Abstract
We study structured multiarmed bandits: the problem of online decision-making under uncertainty in the presence of structural information. In this problem, the decision-maker needs to discover the best course of action despite observing only uncertain rewards over time. The decision-maker is aware of certain convex structural information regarding the reward distributions; that is, the decision-maker knows that the reward distributions of the arms belong to a convex compact set. In the presence of such structural information, the decision-maker would like to exploit this information to minimize regret, where regret is the performance difference against a benchmark policy that knows the best action ahead of time. In the absence of structural information, the classical upper confidence bound (UCB) and Thompson sampling algorithms are well known to suffer minimal regret. However, as recently pointed out by Russo and Van Roy (2018) and Lattimore and Szepesvari (2017), neither algorithm is capable of exploiting structural information that is commonly available in practice. We propose a novel learning algorithm, called "DUSA," whose regret matches the information-theoretic regret lower bound up to a constant factor and which can handle a wide range of structural information. DUSA solves a dual counterpart of the regret lower bound at the empirical reward distribution and follows its suggested play. We show that this idea leads to the first computationally viable learning policy with asymptotic minimal regret for various structural information, including well-known structured bandits such as linear, Lipschitz, and convex bandits, as well as novel structured bandits that had not been studied in the literature for lack of a unified and flexible framework.
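The abstract's core idea, solve the regret lower bound at the empirical reward distribution and follow the allocation it suggests, can be illustrated with a simplified sketch. The sketch below is not the paper's DUSA algorithm: it handles only the unstructured Bernoulli case, where the Graves-Lai-style lower bound says a suboptimal arm i should be pulled about log(t) / KL(mu_i, mu*) times, and it tracks that quota at the empirical means. The names `kl_bernoulli` and `lower_bound_tracking` are illustrative assumptions, not from the paper.

```python
import math
import random

def kl_bernoulli(p, q, eps=1e-9):
    # Kullback-Leibler divergence between Bernoulli(p) and Bernoulli(q),
    # with clipping to avoid log(0).
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def lower_bound_tracking(means, horizon, seed=0):
    """Toy lower-bound-tracking loop for unstructured Bernoulli bandits.

    Plays the empirically best arm unless some arm is behind the pull
    quota log(t) / KL(mu_hat_i, mu_hat_best) implied by the lower bound
    evaluated at the empirical distribution.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    # Pull every arm once to initialize the empirical means.
    for i in range(k):
        counts[i] += 1
        sums[i] += 1.0 if rng.random() < means[i] else 0.0
    for t in range(k, horizon):
        mu_hat = [sums[i] / counts[i] for i in range(k)]
        best = max(range(k), key=lambda i: mu_hat[i])
        # Arms that have not yet met their empirical lower-bound quota.
        behind = [i for i in range(k) if i != best and
                  counts[i] * kl_bernoulli(mu_hat[i], mu_hat[best])
                  < math.log(t + 1)]
        arm = min(behind, key=lambda i: counts[i]) if behind else best
        counts[arm] += 1
        sums[arm] += 1.0 if rng.random() < means[arm] else 0.0
    return counts

counts = lower_bound_tracking([0.9, 0.5, 0.4], horizon=5000)
```

The paper's contribution is precisely that this "mimic the lower bound" recipe can be carried out tractably for general convex structural information, via convex duality, rather than only for the simple unstructured case sketched here.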
Pages: 3951 - 3998 (49 pages)
Related Papers
50 in total
  • [1] A UNIFIED APPROACH TO TRANSLATE CLASSICAL BANDIT ALGORITHMS TO STRUCTURED BANDITS
    Gupta, Samarth
    Chaudhari, Shreyas
    Mukherjee, Subhojyoti
    Joshi, Gauri
    Yagan, Osman
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3360 - 3364
  • [2] Online Learning of Rested and Restless Bandits
    Tekin, Cem
    Liu, Mingyan
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2012, 58 (08) : 5588 - 5611
  • [3] Decentralized Learning for Multiplayer Multiarmed Bandits
    Kalathil, Dileep
    Nayyar, Naumaan
    Jain, Rahul
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2014, 60 (04) : 2331 - 2345
  • [4] NONSTOCHASTIC MULTI-ARMED BANDITS WITH GRAPH-STRUCTURED FEEDBACK
    Alon, Noga
    Cesa-Bianchi, Nicolo
    Gentile, Claudio
    Mannor, Shie
    Mansour, Yishay
    Shamir, Ohad
    SIAM JOURNAL ON COMPUTING, 2017, 46 (06) : 1785 - 1826
  • [5] Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits
    Zimmert, Julian
    Seldin, Yevgeny
    JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22
  • [6] CORRUPTED CONTEXTUAL BANDITS: ONLINE LEARNING WITH CORRUPTED CONTEXT
    Bouneffouf, Djallel
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3145 - 3149
  • [7] Social Learning in Multi-Agent Multi-Armed Bandits
    Sankararaman, A.
    Ganesh, A.
    Shakkottai, S.
    Performance Evaluation Review, 2020, 48 (01) : 29 - 30
  • [8] No Weighted-Regret Learning in Adversarial Bandits with Delays
    Bistritz, Ilai
    Zhou, Zhengyuan
    Chen, Xi
    Bambos, Nicholas
    Blanchet, Jose
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [9] Distributed Online Learning via Cooperative Contextual Bandits
    Tekin, Cem
    van der Schaar, Mihaela
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2015, 63 (14) : 3700 - 3714
  • [10] Clustering of Conversational Bandits for User Preference Learning and Elicitation
    Wu, Junda
    Zhao, Canzhe
    Yu, Tong
    Li, Jingyang
    Li, Shuai
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 2129 - 2139