Reinforcement Learning for Efficient and Tuning-Free Link Adaptation

Cited by: 20
Authors:
Saxena, Vidit [1 ,2 ]
Tullberg, Hugo [2 ]
Jalden, Joakim [1 ]
Affiliations:
[1] KTH Royal Inst Technol, Div Informat Sci & Engn, S-11428 Stockholm, Sweden
[2] Ericsson Res, S-16480 Stockholm, Sweden
Funding:
European Research Council;
Keywords:
Wireless communication; Interference; Signal to noise ratio; Reinforcement learning; Fading channels; Throughput; Channel estimation; Wireless networks; adaptive modulation and coding; reinforcement learning; Thompson sampling; outer loop link adaptation; RATE SELECTION; COMPLEXITY; SYSTEMS;
DOI:
10.1109/TWC.2021.3098972
Chinese Library Classification (CLC):
TM [Electrical engineering]; TN [Electronic and communication technology];
Subject classification codes:
0808; 0809;
Abstract:
Wireless links adapt their data transmission parameters to the dynamic channel state, a process known as link adaptation. Classical link adaptation relies on tuning parameters that are challenging to configure for optimal link performance. Recently, reinforcement learning has been proposed to automate link adaptation, where the transmission parameters are modeled as discrete arms of a multi-armed bandit. In this context, we propose a latent learning model for link adaptation that exploits the correlation between data transmission parameters. Further, motivated by the recent success of Thompson sampling for multi-armed bandit problems, we propose a latent Thompson sampling (LTS) algorithm that quickly learns the optimal parameters for a given channel state. We extend LTS to fading wireless channels through a tuning-free mechanism that automatically tracks the channel dynamics. In numerical evaluations with fading wireless channels, LTS improves the link throughput by up to 100% compared to state-of-the-art link adaptation algorithms.
Pages: 768-780
Number of pages: 13
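
The abstract frames link adaptation as a multi-armed bandit: each modulation-and-coding scheme (MCS) is a discrete arm, the ACK/NACK feedback is a Bernoulli reward, and arms are chosen by Thompson sampling. The sketch below is a minimal, independent-arm Thompson sampling baseline for that setting, not the paper's latent LTS algorithm (which additionally couples the arms through a shared latent channel model) and not its tuning-free tracking mechanism. The MCS rate table, the ThompsonSamplingLA class, the discount-based forgetting, and the toy per-MCS success probabilities are all illustrative assumptions.

```python
import numpy as np

# Illustrative MCS table (spectral efficiencies in bits/symbol); values are
# assumptions for this sketch, not taken from the paper.
MCS_RATES = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 4.5, 6.0])


class ThompsonSamplingLA:
    """Per-arm Thompson sampling for link adaptation.

    Each MCS index is a bandit arm with Bernoulli ACK/NACK rewards and a
    Beta(alpha, beta) posterior over its success probability. This is the
    classic independent-arm baseline, NOT the latent Thompson sampling
    (LTS) algorithm of the paper above.
    """

    def __init__(self, rates, discount=1.0):
        self.rates = np.asarray(rates, dtype=float)
        self.alpha = np.ones_like(self.rates)  # prior successes + 1
        self.beta = np.ones_like(self.rates)   # prior failures + 1
        self.discount = discount  # < 1.0 forgets old feedback (fading channels)

    def select_mcs(self, rng):
        # Sample a plausible success probability for every MCS and pick the
        # arm with the largest sampled expected throughput (rate * p_success).
        p = rng.beta(self.alpha, self.beta)
        return int(np.argmax(self.rates * p))

    def update(self, mcs, ack):
        # Discount all counts toward the uniform prior so the posterior can
        # track a time-varying channel (a common heuristic, not the paper's
        # tuning-free mechanism).
        self.alpha = 1.0 + self.discount * (self.alpha - 1.0)
        self.beta = 1.0 + self.discount * (self.beta - 1.0)
        if ack:
            self.alpha[mcs] += 1.0
        else:
            self.beta[mcs] += 1.0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy channel: assumed per-MCS success probabilities, for illustration only.
    true_p = np.array([0.99, 0.97, 0.92, 0.85, 0.60, 0.35, 0.20, 0.05])
    agent = ThompsonSamplingLA(MCS_RATES, discount=0.98)
    throughput = 0.0
    for t in range(5000):
        m = agent.select_mcs(rng)
        ack = rng.random() < true_p[m]
        agent.update(m, ack)
        throughput += MCS_RATES[m] * ack
    print(f"average throughput: {throughput / 5000:.2f} bits/symbol")
```

The design choice to maximize the sampled rate-times-success-probability, rather than success probability alone, reflects the throughput objective mentioned in the abstract; the discount factor is one simple way to keep the posterior responsive on fading channels, whereas the paper proposes a tuning-free alternative.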