Reinforcement Learning for Efficient and Tuning-Free Link Adaptation

Cited: 20
|
Authors
Saxena, Vidit [1 ,2 ]
Tullberg, Hugo [2 ]
Jalden, Joakim [1 ]
Affiliations
[1] KTH Royal Inst Technol, Div Informat Sci & Engn, S-11428 Stockholm, Sweden
[2] Ericsson Res, S-16480 Stockholm, Sweden
Funding
European Research Council;
Keywords
Wireless communication; Interference; Signal to noise ratio; Reinforcement learning; Fading channels; Throughput; Channel estimation; Wireless networks; adaptive modulation and coding; reinforcement learning; Thompson sampling; outer loop link adaptation; RATE SELECTION; COMPLEXITY; SYSTEMS;
DOI
10.1109/TWC.2021.3098972
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
Wireless links adapt the data transmission parameters to the dynamic channel state - this is called link adaptation. Classical link adaptation relies on tuning parameters that are challenging to configure for optimal link performance. Recently, reinforcement learning has been proposed to automate link adaptation, where the transmission parameters are modeled as discrete arms of a multi-armed bandit. In this context, we propose a latent learning model for link adaptation that exploits the correlation between data transmission parameters. Further, motivated by the recent success of Thompson sampling for multi-armed bandit problems, we propose a latent Thompson sampling (LTS) algorithm that quickly learns the optimal parameters for a given channel state. We extend LTS to fading wireless channels through a tuning-free mechanism that automatically tracks the channel dynamics. In numerical evaluations with fading wireless channels, LTS improves the link throughput by up to 100% compared to state-of-the-art link adaptation algorithms.
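To illustrate the bandit framing the abstract describes, the following is a minimal sketch of classical Beta-Bernoulli Thompson sampling applied to rate selection. It is not the paper's latent LTS algorithm (which additionally exploits correlation across arms and tracks channel dynamics): each arm is a candidate transmission rate, the ACK/NACK feedback is modeled as a Bernoulli outcome, and the names, rates, and success probabilities below are illustrative assumptions.

```python
import random

def thompson_sampling_link_adaptation(rates, ack_prob, n_rounds=3000, seed=1):
    """Baseline Beta-Bernoulli Thompson sampling over rate arms.

    rates: nominal throughput of each candidate transmission rate (arm).
    ack_prob: hidden per-arm probability that a transmission is ACKed
              (stands in for the unknown channel state).
    Returns the average achieved throughput and the per-arm Beta posteriors.
    """
    rng = random.Random(seed)
    k = len(rates)
    alpha = [1.0] * k  # Beta posterior: 1 + observed ACKs per arm
    beta = [1.0] * k   # Beta posterior: 1 + observed NACKs per arm
    total = 0.0
    for _ in range(n_rounds):
        # Draw a plausible ACK probability per arm from its posterior,
        # then transmit with the arm maximizing the sampled expected rate.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: rates[i] * samples[i])
        ack = rng.random() < ack_prob[arm]
        total += rates[arm] if ack else 0.0
        if ack:
            alpha[arm] += 1
        else:
            beta[arm] += 1
    return total / n_rounds, alpha, beta
```

With rates [1.0, 2.0, 4.0] and ACK probabilities [0.99, 0.9, 0.2], the sampler concentrates its pulls on the middle arm, whose expected rate (1.8) dominates. The paper's LTS replaces the independent per-arm posteriors with a latent model, so feedback on one rate also informs neighboring rates.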
Pages: 768-780
Page count: 13
Related papers
50 records
  • [21] Recipe tuning by Reinforcement Learning in the SandS ecosystem
    Fernandez-Gauna, Borja
    Grana, Manuel
    2014 6TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL ASPECTS OF SOCIAL NETWORKS (CASON), 2014, : 55 - 60
  • [22] Dynamic Tuning of PI-Controllers based on Model-free Reinforcement Learning Methods
    Brujeni, Lena Abbasi
    Lee, Jong Min
    Shah, Sirish L.
    INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2010), 2010, : 453 - 458
  • [23] Iterative Learning for Reliable Link Adaptation in the Internet of Underwater Things
    Byun, Junghun
    Cho, Yong-Ho
    Im, Taeho
    Ko, Hak-Lim
    Shin, Kyungseop
    Kim, Juyeop
    Jo, Ohyun
    IEEE ACCESS, 2021, 9 : 30408 - 30416
  • [24] Learning to Score: Tuning Cluster Schedulers through Reinforcement Learning
    Asenov, Martin
    Deng, Qiwen
    Yeung, Gingfung
    Barker, Adam
    2023 IEEE INTERNATIONAL CONFERENCE ON CLOUD ENGINEERING, IC2E, 2023, : 113 - 120
  • [25] Reinforcement Learning for Efficient Scheduling in Complex Semiconductor Equipment
    Suerich, Doug
    Young, Terry
    2020 31ST ANNUAL SEMI ADVANCED SEMICONDUCTOR MANUFACTURING CONFERENCE (ASMC), 2020,
  • [26] TEMPORAL LINK PREDICTION VIA REINFORCEMENT LEARNING
    Tao, Ye
    Li, Ying
    Wu, Zhonghai
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3470 - 3474
  • [27] Reinforcement Learning Based Efficient Underwater Image Communication
    Su, Wei
    Tao, Jincheng
    Pei, Yuehua
    You, Xudong
    Xiao, Liang
    Cheng, En
    IEEE COMMUNICATIONS LETTERS, 2021, 25 (03) : 883 - 886
  • [28] Reinforcement learning approach to motion control of 2-link planer manipulator with a free joint
    Goto, T
    Kamaya, H
    Abe, K
    SICE 2004 ANNUAL CONFERENCE, VOLS 1-3, 2004, : 1774 - 1779
  • [29] Efficient Sim-to-Real Transfer in Reinforcement Learning Through Domain Randomization and Domain Adaptation
    Shakerimov, Aidar
    Alizadeh, Tohid
    Varol, Huseyin Atakan
    IEEE ACCESS, 2023, 11 : 136809 - 136824
  • [30] Energy-Efficient Resource Allocation in Cognitive Radio Networks Under Cooperative Multi-Agent Model-Free Reinforcement Learning Schemes
    Kaur, Amandeep
    Kumar, Krishan
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2020, 17 (03): : 1337 - 1348