Optimal Learning for Structured Bandits

Cited: 0
Authors
Van Parys, Bart [1 ]
Golrezaei, Negin [1 ]
Affiliations
[1] MIT, Sloan Sch Management, Cambridge, MA 02142 USA
Keywords
structured bandits; online learning; convex duality; mimicking regret lower bound;
DOI
10.1287/mnsc.2020.02108
Chinese Library Classification
C93 [Management]
Discipline Codes
12; 1201; 1202; 120202
Abstract
We study structured multiarmed bandits, the problem of online decision-making under uncertainty in the presence of structural information. In this problem, the decision-maker must discover the best course of action despite observing only uncertain rewards over time. The decision-maker is aware of certain convex structural information regarding the reward distributions; that is, the decision-maker knows that the reward distributions of the arms belong to a convex compact set. In the presence of such structural information, the decision-maker would like to minimize regret by exploiting this information, where regret is the performance difference against a benchmark policy that knows the best action ahead of time. In the absence of structural information, the classical upper confidence bound (UCB) and Thompson sampling algorithms are well known to incur minimal regret. However, as recently pointed out by Russo and Van Roy (2018) and Lattimore and Szepesvari (2017), neither algorithm is capable of exploiting structural information that is commonly available in practice. We propose a novel learning algorithm, called "DUSA," whose regret matches the information-theoretic regret lower bound up to a constant factor and which can handle a wide range of structural information. DUSA solves a dual counterpart of the regret lower bound at the empirical reward distribution and follows its suggested play. We show that this idea leads to the first computationally viable learning policy with asymptotically minimal regret for various forms of structural information, including well-known structured bandits such as linear, Lipschitz, and convex bandits, as well as novel structured bandits that have not been studied in the literature for lack of a unified and flexible framework.
Pages: 3951 - 3998
Page Count: 49
Related Papers
50 records in total
  • [41] Linear Online Learning over Structured Data with Distributed Tree Kernels
    Filice, Simone
    Croce, Danilo
    Basili, Roberto
    Zanzotto, Fabio Massimo
    2013 12TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2013), VOL 1, 2013, : 123 - 128
  • [42] Visual tracking with tree-structured appearance model for online learning
    Lv, Yun-Qiu
    Liu, Kai
    Cheng, Fei
    Li, Wei
    IET IMAGE PROCESSING, 2019, 13 (12) : 2106 - 2115
  • [43] Multi-Armed Bandits With Costly Probes
    Elumar, Eray Can
    Tekin, Cem
    Yagan, Osman
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2025, 71 (01) : 618 - 643
  • [44] Exploring Clustering of Bandits for Online Recommendation System
    Yang, Liu
    Liu, Bo
    Lin, Leyu
    Xia, Feng
    Chen, Kai
    Yang, Qiang
    RECSYS 2020: 14TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, 2020, : 120 - 129
  • [45] Multi-Player Bandits: The Adversarial Case
    Alatur, Pragnya
    Levy, Kfir Y.
    Krause, Andreas
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21
  • [46] Multi-Armed Bandits With Correlated Arms
    Gupta, Samarth
    Chaudhari, Shreyas
    Joshi, Gauri
    Yagan, Osman
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2021, 67 (10) : 6711 - 6732
  • [47] The K-armed dueling bandits problem
    Yue, Yisong
    Broder, Josef
    Kleinberg, Robert
    Joachims, Thorsten
    JOURNAL OF COMPUTER AND SYSTEM SCIENCES, 2012, 78 (05) : 1538 - 1556
  • [48] Complete Policy Regret Bounds for Tallying Bandits
    Malik, Dhruv
    Li, Yuanzhi
    Singh, Aarti
    CONFERENCE ON LEARNING THEORY, VOL 178, 2022, 178
  • [49] Markovian Restless Bandits and Index Policies: A Review
    Nino-Mora, Jose
    MATHEMATICS, 2023, 11 (07)
  • [50] One Arm to Rule Them All: Online Learning with Multi-armed Bandits for Low-Resource Conversational Agents
    Mendonca, Vania
    Coheur, Luisa
    Sardinha, Alberto
    PROGRESS IN ARTIFICIAL INTELLIGENCE (EPIA 2021), 2021, 12981 : 625 - 634