Distributed Stochastic Online Learning Policies for Opportunistic Spectrum Access

Cited by: 54
Authors
Gai, Yi [1 ]
Krishnamachari, Bhaskar [2 ]
Affiliations
[1] Intel Labs, Hillsboro, OR 97124 USA
[2] Univ So Calif, Dept Elect Engn, Los Angeles, CA 90089 USA
Funding
US National Science Foundation (NSF);
Keywords
Online learning; dynamic spectrum access; decentralized multi-armed bandit; EFFICIENT ALLOCATION RULES; MULTIARMED BANDIT PROBLEM; MULTIPLE PLAYS; ALGORITHMS;
DOI
10.1109/TSP.2014.2360821
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
The fundamental problem of multiple secondary users contending for opportunistic spectrum access over multiple channels in cognitive radio networks has been formulated recently as a decentralized multi-armed bandit (D-MAB) problem. In a D-MAB problem there are M users and N arms (channels), each of which offers i.i.d. stochastic rewards with unknown means so long as it is accessed without collision. The goal is to design distributed online learning policies that incur minimal regret. We consider two related problem formulations in this paper. First, we consider the setting where the users have a prioritized ranking, such that it is desired for the K-th-ranked user to learn to access the arm offering the K-th highest mean reward. For this problem, we present DLP, the first distributed policy that yields regret that is uniformly logarithmic over time without requiring any prior assumption about the mean rewards. Second, we consider the case when a fair access policy is required, i.e., it is desired for all users to experience the same mean reward. For this problem, we present DLF, a distributed policy that yields order-optimal regret scaling with respect to the number of users and arms, better than previously proposed policies in the literature. Both of our distributed policies make use of an innovative modification of the well-known UCB1 policy for the classic multi-armed bandit problem that allows a single user to learn how to play the arm that yields the K-th largest mean reward.
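To make the decentralized setting described in the abstract concrete, the following is a minimal Python sketch of the prioritized formulation: M users, N channels with unknown Bernoulli means, zero reward on collision, and no communication between users. Each user here runs a plain UCB1 index and simply plays the arm whose index ranks at its own priority position. This is an illustrative stand-in only, not the paper's DLP/DLF policies or their SL(K)-style index modification; the arm means, horizon, and the choice to let a colliding user record a zero-reward observation are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem instance (assumed values, not from the paper)
M, N, T = 3, 5, 20000                      # users, arms (channels), horizon
mu = np.array([0.9, 0.8, 0.7, 0.5, 0.3])   # unknown Bernoulli means of the arms

# Each user keeps only its own statistics: fully decentralized, no information sharing
counts = np.zeros((M, N))                  # per-user play counts
means = np.zeros((M, N))                   # per-user empirical mean rewards

def ucb_index(k, t):
    # Standard UCB1 index for user k at round t; unplayed arms get +inf
    # to force initial exploration of every arm.
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts[k], 1))
    idx = means[k] + bonus
    idx[counts[k] == 0] = np.inf
    return idx

# Benchmark: per-round reward of the ideal orthogonal assignment
# (top-M arms each used by exactly one user)
best_per_round = np.sort(mu)[::-1][:M].sum()
regret = 0.0

for t in range(1, T + 1):
    # Each user independently targets the arm whose index ranks at its priority:
    # user 0 plays the highest-index arm, user 1 the second highest, and so on.
    choices = np.empty(M, dtype=int)
    for k in range(M):
        order = np.argsort(ucb_index(k, t))[::-1]
        choices[k] = order[k]

    rewards = np.zeros(M)
    for k in range(M):
        a = choices[k]
        # Collision model: an arm chosen by more than one user pays nobody.
        if np.count_nonzero(choices == a) == 1:
            rewards[k] = float(rng.random() < mu[a])
        # Simplification: the zero reward of a collision round is still used
        # to update the user's own estimate.
        counts[k, a] += 1
        means[k, a] += (rewards[k] - means[k, a]) / counts[k, a]

    regret += best_per_round - rewards.sum()

print(f"cumulative regret after {T} rounds: {regret:.1f}")

In this sketch collisions become rare once the per-user estimates separate the top-M arms, because each priority rank targets a different position in the same ranking. The paper's DLP and DLF policies replace the naive "K-th largest UCB1 index" step with a dedicated modification of UCB1 for learning the K-th best arm, which is what yields their uniformly logarithmic regret guarantees.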
Pages: 6184 - 6193
Number of pages: 10
Related References
18 in total
  • [1] Multi-channel Opportunistic Access: A Case of Restless Bandits with Multiple Plays
    Ahmad, Sahand Haji Ali
    Liu, Mingyan
    [J]. 2009 47TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING, VOLS 1 AND 2, 2009, : 1361 - 1368
  • [2] Distributed Algorithms for Learning and Cognitive Medium Access with Logarithmic Regret
    Anandkumar, Animashree
    Michael, Nithin
    Tang, Kevin
    Swami, Ananthram
    [J]. IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2011, 29 (04) : 731 - 745
  • [3] Opportunistic Spectrum Access with Multiple Users: Learning under Competition
    Anandkumar, Animashree
    Michael, Nithin
    Tang, Ao
    [J]. 2010 PROCEEDINGS IEEE INFOCOM, 2010,
  • [4] Asymptotically Efficient Allocation Rules for the Multiarmed Bandit Problem with Multiple Plays, Part II: Markovian Rewards
    Anantharam, V
    Varaiya, P
    Walrand, J
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 1987, 32 (11) : 977 - 982
  • [5] Asymptotically Efficient Allocation Rules for the Multiarmed Bandit Problem with Multiple Plays, Part I: IID Rewards
    Anantharam, V
    Varaiya, P
    Walrand, J
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 1987, 32 (11) : 968 - 976
  • [6] [Anonymous], 2013, 2013 IEEE INF THEOR
  • [7] Finite-time analysis of the multiarmed bandit problem
    Auer, P
    Cesa-Bianchi, N
    Fischer, P
    [J]. MACHINE LEARNING, 2002, 47 (2-3) : 235 - 256
  • [8] Di Felice M., 2011, Global Telecommunications Conference (GLOBECOM 2011), P1
  • [9] Gai Y., 2011, PROC INF THEORY APPL, P1
  • [10] Gai Y., 2010, P IEEE S NEW FRONT D, P1