DRLLA: Deep Reinforcement Learning for Link Adaptation

Cited by: 4
Authors
Geiser, Florian [1 ,2 ]
Wessel, Daniel [1 ]
Hummert, Matthias [3 ]
Weber, Andreas [4 ]
Wuebben, Dirk [3 ]
Dekorsy, Armin [3 ]
Viseras, Alberto [1 ]
Affiliations
[1] Motius, D-80807 Munich, Germany
[2] Tech Univ Munich, Elect & Comp Engn, D-80333 Munich, Germany
[3] Univ Bremen, Dept Commun Engn, D-28359 Bremen, Germany
[4] Nokia Bell Labs, D-81541 Munich, Germany
Source
TELECOM | 2022, Vol. 3, Issue 4
Keywords
machine learning; mobile communication; reinforcement learning; link adaptation; channel observation;
DOI
10.3390/telecom3040037
CLC Number (Chinese Library Classification)
TN [Electronic Technology, Communication Technology];
Subject Classification Code
0809;
Abstract
Link adaptation (LA) matches transmission parameters to the conditions on the radio link and therefore plays a major role in telecommunications. Improving LA is among the requirements for next-generation mobile telecommunication systems: refining link adaptation yields higher channel efficiency (i.e., a higher data rate from the same bandwidth, or the same rate from less bandwidth). Furthermore, by replacing traditional LA algorithms, radio transmission systems can adapt better to a dynamic environment. Current state-of-the-art approaches have several drawbacks, including predefined, static decision boundaries and reliance on a single, low-dimensional metric. A widely used approach for handling a variety of related input variables is a neural network (NN). NNs can exploit multiple inputs, and combining them with reinforcement learning (RL) yields the so-called deep reinforcement learning (DRL) approach. With DRL, more complex parameter relationships can be taken into account when recommending the modulation and coding scheme (MCS) used in LA. Hence, this work examines the potential of DRL and includes experiments on different channels. The main contribution of this work lies in using DRL algorithms for LA, optimized for throughput based on a subcarrier observation matrix and a packet success rate feedback system. We apply Natural Actor-Critic (NAC) and Proximal Policy Optimization (PPO) algorithms on simulated channels, followed by a feasibility study on a prerecorded real-world channel. Empirical results on the examined channels suggest that Deep Reinforcement Learning for Link Adaptation (DRLLA) performs well, achieving promising data rates on the additive white Gaussian noise (AWGN) channel, the non-line-of-sight (NLOS) channel, and a prerecorded real-world channel. Regardless of the channel impairment, the agent responds to changing signal-to-interference-plus-noise-ratio (SINR) levels, as reflected in the expected changes in the effective data rate.
Pages: 692-705
Page count: 14
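As a rough illustration of the setup described in the abstract, the Python sketch below shows what the agent's interface could look like: the observation combines per-subcarrier SINR estimates with the latest packet success feedback, the action is an MCS index, and the reward is the resulting effective data rate. This is a minimal, self-contained toy, not the authors' implementation; the class name ToyLinkAdaptationEnv, the MCS_RATES table, and the SINR-threshold success model are hypothetical placeholders. A PPO or NAC agent, as used in the paper, would be trained against an interface of this shape.

import numpy as np

# Hypothetical per-MCS spectral efficiencies in bits/symbol; illustrative values only.
MCS_RATES = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 4.5, 5.5, 6.0])


class ToyLinkAdaptationEnv:
    """Toy link-adaptation environment (not the authors' simulator).

    Observation: per-subcarrier SINR estimates (a one-symbol slice of the
    "subcarrier observation matrix") plus the most recent packet success flag.
    Action:      index of the modulation and coding scheme (MCS) to transmit with.
    Reward:      effective data rate, i.e. the MCS rate if the packet was decoded,
                 zero otherwise.
    """

    def __init__(self, n_subcarriers=64, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_subcarriers = n_subcarriers
        self.mean_sinr_db = 10.0
        self.last_success = 1.0

    def reset(self):
        self.last_success = 1.0
        return self._observe()

    def _observe(self):
        # Frequency-selective SINR across subcarriers around a drifting mean,
        # mimicking a time-varying channel.
        sinr_db = self.mean_sinr_db + self.rng.normal(0.0, 3.0, self.n_subcarriers)
        return np.concatenate([sinr_db, [self.last_success]])

    def step(self, mcs_index):
        # Crude surrogate for the PHY: a packet is decoded when the average SINR
        # exceeds an MCS-dependent threshold (purely illustrative).
        sinr_db = self.mean_sinr_db + self.rng.normal(0.0, 3.0, self.n_subcarriers)
        threshold_db = 2.0 * mcs_index                  # higher MCS needs better SINR
        success = float(sinr_db.mean() > threshold_db)
        reward = success * MCS_RATES[mcs_index]         # effective data rate proxy
        self.last_success = success
        self.mean_sinr_db += self.rng.normal(0.0, 0.2)  # slow channel drift
        return self._observe(), reward, False, {}


if __name__ == "__main__":
    env = ToyLinkAdaptationEnv()
    obs = env.reset()
    rng = np.random.default_rng(1)
    total = 0.0
    for _ in range(1000):
        action = rng.integers(len(MCS_RATES))           # placeholder random policy
        obs, reward, done, info = env.step(action)
        total += reward
    print(f"average effective rate under a random policy: {total / 1000:.2f} bits/symbol")

The random policy in the __main__ block only exercises the interface; in the paper's setting the policy would instead be a neural network whose MCS choices are optimized for throughput from this observation/reward loop.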