Reinforcement Learning for Efficient and Tuning-Free Link Adaptation

Cited by: 20
|
Authors
Saxena, Vidit [1 ,2 ]
Tullberg, Hugo [2 ]
Jalden, Joakim [1 ]
Affiliations
[1] KTH Royal Inst Technol, Div Informat Sci & Engn, S-11428 Stockholm, Sweden
[2] Ericsson Res, S-16480 Stockholm, Sweden
Funding
European Research Council;
Keywords
Wireless communication; Interference; Signal-to-noise ratio; Reinforcement learning; Fading channels; Throughput; Channel estimation; Wireless networks; adaptive modulation and coding; reinforcement learning; Thompson sampling; outer loop link adaptation; RATE SELECTION; COMPLEXITY; SYSTEMS;
DOI
10.1109/TWC.2021.3098972
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline codes
0808 ; 0809 ;
Abstract
Wireless links adapt their data transmission parameters to the dynamic channel state; this is called link adaptation. Classical link adaptation relies on tuning parameters that are challenging to configure for optimal link performance. Recently, reinforcement learning has been proposed to automate link adaptation, where the transmission parameters are modeled as discrete arms of a multi-armed bandit. In this context, we propose a latent learning model for link adaptation that exploits the correlation between data transmission parameters. Further, motivated by the recent success of Thompson sampling for multi-armed bandit problems, we propose a latent Thompson sampling (LTS) algorithm that quickly learns the optimal parameters for a given channel state. We extend LTS to fading wireless channels through a tuning-free mechanism that automatically tracks the channel dynamics. In numerical evaluations with fading wireless channels, LTS improves the link throughput by up to 100% compared to state-of-the-art link adaptation algorithms.
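To make the bandit framing of the abstract concrete, the sketch below shows vanilla Beta-Bernoulli Thompson sampling for rate selection over discrete modulation-and-coding-scheme (MCS) arms. This is a simplified, independent-arm baseline, not the paper's latent variant: the paper's LTS additionally exploits correlation between neighbouring rates and adds a tuning-free tracking mechanism for fading channels. The class and variable names here are illustrative assumptions.

```python
import random

class ThompsonSamplingLinkAdapter:
    """Beta-Bernoulli Thompson sampling over discrete MCS arms.

    Each arm carries a nominal rate (e.g. spectral efficiency) and a
    Beta posterior over its ACK probability. This is an independent-arm
    sketch; the paper's latent TS also models correlation between arms.
    """

    def __init__(self, rates):
        self.rates = list(rates)              # nominal rate of each MCS arm
        self.alpha = [1.0] * len(self.rates)  # Beta prior: 1 + #ACKs
        self.beta = [1.0] * len(self.rates)   # Beta prior: 1 + #NACKs

    def select_arm(self):
        # Draw a plausible success probability for every arm from its
        # posterior, then transmit with the arm maximizing sampled throughput.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(self.rates)),
                   key=lambda i: self.rates[i] * samples[i])

    def update(self, arm, ack):
        # ACK/NACK feedback updates the chosen arm's Beta posterior.
        if ack:
            self.alpha[arm] += 1.0
        else:
            self.beta[arm] += 1.0
```

In a static channel this loop concentrates on the arm with the best rate-times-ACK-probability product; the paper's tuning-free extension for fading channels would additionally discount stale observations so the posterior can track channel dynamics.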
Pages: 768 - 780 (13 pages)
Related papers
50 records
  • [41] A Framework for Automated Cellular Network Tuning With Reinforcement Learning
    Mismar, Faris B.
    Choi, Jinseok
    Evans, Brian L.
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2019, 67 (10) : 7152 - 7167
  • [42] On Efficient Sampling in Offline Reinforcement Learning
    Jia, Qing-Shan
    2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024, : 1 - 6
  • [43] Distributed Reinforcement Learning for Flexible and Efficient UAV Swarm Control
    Venturini, Federico
    Mason, Federico
    Pase, Francesco
    Chiariotti, Federico
    Testolin, Alberto
    Zanella, Andrea
    Zorzi, Michele
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2021, 7 (03) : 955 - 969
  • [44] Behavior Priors for Efficient Reinforcement Learning
    Tirumala, Dhruva
    Galashov, Alexandre
    Noh, Hyeonwoo
    Hasenclever, Leonard
    Pascanu, Razvan
    Schwarz, Jonathan
    Desjardins, Guillaume
    Czarnecki, Wojciech Marian
    Ahuja, Arun
    Teh, Yee Whye
    Heess, Nicolas
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [45] Reinforcement Learning for Energy-Efficient Trajectory Design of UAVs
    Arani, Atefeh Hajijamali
    Azari, M. Mahdi
    Hu, Peng
    Zhu, Yeying
    Yanikomeroglu, Halim
    Safavi-Naeini, Safieddin
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (11) : 9060 - 9070
  • [46] Reinforcement Learning for Efficient and Fair Coexistence Between LTE-LAA and Wi-Fi
    Han, Mengqi
    Khairy, Sami
    Cai, Lin X.
    Cheng, Yu
    Zhang, Ran
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (08) : 8764 - 8776
  • [47] Fast Reinforcement Learning for Energy-Efficient Wireless Communication
    Mastronarde, Nicholas
    van der Schaar, Mihaela
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2011, 59 (12) : 6262 - 6266
  • [48] EFFICIENT ABSTRACTION SELECTION IN REINFORCEMENT LEARNING
    van Seijen, Harm
    Whiteson, Shimon
    Kester, Leon
    COMPUTATIONAL INTELLIGENCE, 2014, 30 (04) : 657 - 699
  • [49] Tuning-Free Bayesian Estimation Algorithms for Faulty Sensor Signals in State-Space
    Zhao, Shunyi
    Li, Ke
    Ahn, Choon Ki
    Huang, Biao
    Liu, Fei
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2023, 70 (01) : 921 - 929
  • [50] Reinforcement-learning-based parameter adaptation method for particle swarm optimization
    Yin, Shiyuan
    Jin, Min
    Lu, Huaxiang
    Gong, Guoliang
    Mao, Wenyu
    Chen, Gang
    Li, Wenchang
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (05) : 5585 - 5609