Information Sharing in Distributed Stochastic Bandits

Cited by: 0
Authors
Buccapatnam, Swapna [1 ,2 ]
Tan, Jian [2 ]
Zhang, Li [2 ]
Affiliations
[1] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
[2] IBM TJ Watson Res Ctr, Yorktown Hts, NY 10598 USA
Source
2015 IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (INFOCOM) | 2015
Keywords
MULTIARMED BANDIT
DOI
Not available
Chinese Library Classification
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
Information sharing is an important issue for stochastic bandit problems in a distributed setting. Consider N players facing the same multi-armed bandit problem: all players receive requests simultaneously and must choose one of M actions for each request. Sharing information among the N players can reduce each player's regret, but it also incurs cooperation and communication overhead. In this setting, we study how cooperation and communication affect system performance, measured by regret and communication cost. For both scenarios, we establish a uniform lower bound on the regret of the entire system as a function of time and network size. Concerning cooperation, we study the problem from a game-theoretic perspective: when each player's actions and payoffs are immediately visible to all others, we identify strategies for all players under which cooperative exploration is ensured. Regarding communication cost, we consider incomplete information sharing, in which a player's payoffs and actions are not entirely available to others; the players communicate observations to one another to reduce their regret, albeit at a cost. We show that a logarithmic communication cost is necessary to achieve the optimal regret, and, for Bernoulli arrivals, we specify a policy that achieves the optimal regret with a logarithmic communication cost. Our work opens a novel direction towards understanding information sharing for active learning in a distributed environment.
Pages: 9
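
The abstract describes N players solving the same M-armed bandit while pooling their observations. As a rough, self-contained sketch of the full-information-sharing scenario (a simulation using the classical UCB1 index as the per-decision rule, which is not necessarily the paper's policy; N_PLAYERS, M_ARMS, MEANS, and HORIZON are made-up illustrative parameters), the following Python snippet shows how shared statistics let every player's pull contribute to everyone's estimates, which is the mechanism that can lower each player's regret:

```python
import math
import random

# Illustrative sketch only (not the paper's algorithm): N players pull
# arms of the same M-armed Bernoulli bandit each round, with complete
# information sharing, i.e. every pull updates one common set of stats.

N_PLAYERS = 4            # hypothetical network size
M_ARMS = 3               # hypothetical number of actions
MEANS = [0.3, 0.5, 0.7]  # hypothetical Bernoulli arm means
HORIZON = 5000

counts = [0] * M_ARMS    # pooled pull counts across all players
sums = [0.0] * M_ARMS    # pooled reward sums across all players
regret = 0.0
best_mean = max(MEANS)

for t in range(1, HORIZON + 1):
    for _ in range(N_PLAYERS):
        if 0 in counts:                       # play each arm once first
            arm = counts.index(0)
        else:
            total = sum(counts)
            # UCB1 index computed from the *shared* statistics
            arm = max(
                range(M_ARMS),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2 * math.log(total) / counts[a]),
            )
        reward = 1.0 if random.random() < MEANS[arm] else 0.0
        counts[arm] += 1                      # update visible to all players
        sums[arm] += reward
        regret += best_mean - MEANS[arm]      # expected (pseudo-)regret

print(f"cumulative regret over {HORIZON} rounds x {N_PLAYERS} players: {regret:.1f}")
```

With pooled statistics the system explores each suboptimal arm roughly as often as a single player would, so the per-player regret shrinks with N; the paper's contribution is analyzing what happens when this sharing is only partial and each communicated observation costs something.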