Mitigating starvation in dense WLANs: A multi-armed Bandit solution

Cited: 4
Authors
Bardou, Anthony [1]
Begin, Thomas [1]
Busson, Anthony [1]
Affiliations
[1] Univ Lyon, ENS Lyon, UCBL, CNRS, Inria, LIP, UMR 5668, 46 allee Italie, F-69007 Lyon, France
Keywords
WLANs; Spatial reuse; Fairness; Reinforcement learning; Thompson sampling; Power control; Clear channel assessment
DOI
10.1016/j.adhoc.2022.103015
Chinese Library Classification
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
With the recent 802.11ax amendment to the IEEE standard, commercialized as Wi-Fi 6, WLANs have the potential to greatly improve the spatial reuse of radio channels. This relies on the new ability of APs (Access Points) to dynamically modify their transmission power as well as the signal energy threshold above which they consider the radio channel to be busy rather than free. In general, selecting adequate values for these parameters is complex because of (i) the high dimensionality of the problem and (ii) the uncertainty of the radio environment. To overcome these difficulties, we frame this problem as a MAB (Multi-Armed Bandit) problem and propose an efficient and robust solution combining Thompson sampling, an original sampling of WLAN configurations, and a tailor-made reward function. We evaluate the efficiency of our solution, as well as several alternatives, on scenarios inspired by real-life WLAN deployments using the network simulator ns-3. The numerical results demonstrate the ability of our solution, and its superiority over the alternatives, to find an adequate parameterization at each AP, thereby significantly improving the overall performance of WLANs.
Pages: 13
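For readers unfamiliar with the approach named in the abstract, the sketch below illustrates Thompson sampling on a generic Bernoulli multi-armed bandit. It is not the authors' algorithm: the paper uses a tailor-made reward function and an original sampling of WLAN configurations, whereas here each arm merely stands in for one hypothetical AP configuration (e.g., a transmission-power and clear-channel-assessment-threshold pair) and the binary reward is an assumed stand-in for meeting some performance target.

import random

def thompson_sampling(n_arms, pull_arm, n_rounds=1000):
    # Beta(1, 1) priors on each arm's unknown success probability.
    successes = [1] * n_arms  # alpha parameters
    failures = [1] * n_arms   # beta parameters
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior and play the best one.
        samples = [random.betavariate(successes[a], failures[a])
                   for a in range(n_arms)]
        arm = max(range(n_arms), key=samples.__getitem__)
        reward = pull_arm(arm)  # must return 0 or 1
        # Bayesian update of the chosen arm's Beta posterior.
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

# Toy usage: three hypothetical AP configurations with hidden reward rates.
true_rates = [0.2, 0.5, 0.8]
print(thompson_sampling(3, lambda a: int(random.random() < true_rates[a])))

In the Beta-Bernoulli setting shown here, exploration arises naturally because poorly sampled arms keep wide posteriors; the paper replaces the binary reward with its tailor-made reward function evaluated over WLAN configurations.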