A Framework for Automated Cellular Network Tuning With Reinforcement Learning

Cited by: 20
Authors
Mismar, Faris B. [1 ]
Choi, Jinseok [1 ]
Evans, Brian L. [1 ]
Affiliation
[1] The University of Texas at Austin, Department of Electrical and Computer Engineering, Wireless Networking and Communications Group, Austin, TX 78712 USA
Keywords
Framework; reinforcement learning; artificial intelligence; VoLTE; MOS; QoE; wireless; tuning; optimization; SON;
DOI
10.1109/TCOMM.2019.2926715
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Tuning cellular network performance against ever-present wireless impairments can dramatically improve reliability for end users. In this paper, we formulate cellular network performance tuning as a reinforcement learning (RL) problem and provide a solution that improves performance in both indoor and outdoor environments. By leveraging the ability of Q-learning to estimate future performance-improvement rewards, we propose two algorithms: 1) closed-loop power control (PC) for downlink voice over LTE (VoLTE) and 2) self-organizing network (SON) fault management. The VoLTE PC algorithm uses RL to adjust the indoor base station transmit power so that the signal-to-interference-plus-noise ratio (SINR) of a user equipment (UE) meets its target SINR, without the UE having to send power control requests. The SON fault management algorithm uses RL to improve the performance of an outdoor base station cluster by resolving network faults through configuration management. Both algorithms exploit measurements from connected users, wireless impairments, and relevant configuration parameters to solve a non-convex performance optimization problem. Simulation results show that the proposed RL-based algorithms outperform today's industry standards in realistic cellular communication environments.
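To make the closed-loop power control idea concrete, below is a minimal tabular Q-learning sketch of the kind of agent the abstract describes: it observes a quantized SINR error at the UE, steps the base station transmit power up or down, and is rewarded for driving the measured SINR toward the target, with no power control requests from the UE. The state discretization, action step sizes, reward, hyperparameters, and the toy measure_sinr channel model are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative (assumed) setup: target SINR, power limits, +-1 dB power steps,
# and quantized SINR-error states. None of these values come from the paper.
TARGET_SINR_DB = 15.0
POWER_MIN_DBM, POWER_MAX_DBM = 20.0, 46.0
ACTIONS_DB = np.array([-1.0, 0.0, +1.0])      # transmit-power adjustments in dB
ERROR_BINS = np.arange(-20, 21, 2)            # quantized SINR-error states

alpha, gamma, epsilon = 0.1, 0.9, 0.1         # learning rate, discount, exploration
Q = np.zeros((len(ERROR_BINS), len(ACTIONS_DB)))

def measure_sinr(tx_power_dbm, rng):
    """Toy stand-in for a downlink UE SINR report: fixed coupling loss plus fading."""
    coupling_loss_db = 20.0                   # assumed net loss from Tx power to SINR
    fading_db = rng.normal(0.0, 2.0)          # random wireless impairment
    return tx_power_dbm - coupling_loss_db + fading_db

def to_state(sinr_db):
    """Quantize the SINR error (measured minus target) into a state index."""
    error_db = sinr_db - TARGET_SINR_DB
    return int(np.clip(np.digitize(error_db, ERROR_BINS) - 1, 0, len(ERROR_BINS) - 1))

rng = np.random.default_rng(0)
tx_power = 30.0
state = to_state(measure_sinr(tx_power, rng))

for _ in range(5000):
    # epsilon-greedy selection over the power-step actions
    if rng.random() < epsilon:
        action = int(rng.integers(len(ACTIONS_DB)))
    else:
        action = int(np.argmax(Q[state]))

    tx_power = float(np.clip(tx_power + ACTIONS_DB[action], POWER_MIN_DBM, POWER_MAX_DBM))
    sinr = measure_sinr(tx_power, rng)
    next_state = to_state(sinr)
    reward = -abs(sinr - TARGET_SINR_DB)      # closer to the target SINR -> higher reward

    # standard Q-learning update
    Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
    state = next_state

print(f"final Tx power: {tx_power:.1f} dBm, last measured SINR: {sinr:.1f} dB")
```

This sketch only shows how a Q-learning loop can close the power-control loop from downlink SINR measurements alone; the paper's actual state and action spaces, rewards, and network model are defined by the proposed framework and evaluated for both the VoLTE PC and SON fault management use cases.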
Pages: 7152-7167
Page count: 16
Related Papers
50 records in total (first 10 shown below)
  • [1] Reinforcement Learning for Automated Energy Efficient Mobile Network Performance Tuning
    Corcoran, Diarmuid
    Kreuger, Per
    Boman, Magnus
    PROCEEDINGS OF THE 2021 17TH INTERNATIONAL CONFERENCE ON NETWORK AND SERVICE MANAGEMENT (CNSM 2021): SMART MANAGEMENT FOR FUTURE NETWORKS AND SERVICES, 2021, : 216 - 224
  • [2] Automated Hyperparameter Tuning in Reinforcement Learning for Quadrupedal Robot Locomotion
    Kim, Myeongseop
    Kim, Jung-Su
    Park, Jae-Han
    ELECTRONICS, 2024, 13 (01)
  • [3] Multi-Modal Legged Locomotion Framework With Automated Residual Reinforcement Learning
    Yu, Chen
    Rosendo, Andre
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04) : 10312 - 10319
  • [4] Automated Design of Metaheuristics Using Reinforcement Learning Within a Novel General Search Framework
    Yi, Wenjie
    Qu, Rong
    Jiao, Licheng
    Niu, Ben
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2023, 27 (04) : 1072 - 1084
  • [5] Tuning Apex DQN: A Reinforcement Learning based Deep Q-Network Algorithm
    Ruhela, Dhani
    Ruhela, Amit
    PRACTICE AND EXPERIENCE IN ADVANCED RESEARCH COMPUTING 2024, PEARC 2024, 2024,
  • [6] Learning to Score: Tuning Cluster Schedulers through Reinforcement Learning
    Asenov, Martin
    Deng, Qiwen
    Yeung, Gingfung
    Barker, Adam
    2023 IEEE INTERNATIONAL CONFERENCE ON CLOUD ENGINEERING, IC2E, 2023, : 113 - 120
  • [7] A Generalized Framework for Reinforcement Learning
    Schönekehs, C. R.
    Witt, R.
    Schmitt, R. H.
    ZWF Zeitschrift fuer Wirtschaftlichen Fabrikbetrieb, 2023, 118 (11) : 795 - 800
  • [8] A Unifying Framework for Reinforcement Learning and Planning
    Moerland, Thomas M.
    Broekens, Joost
    Plaat, Aske
    Jonker, Catholijn M.
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [9] Automated Flowsheet Synthesis Using Hierarchical Reinforcement Learning: Proof of Concept
    Gottl, Quirin
    Tonges, Yannic
    Grimm, Dominik G.
    Burger, Jakob
    CHEMIE INGENIEUR TECHNIK, 2021, 93 (12) : 2010 - 2018
  • [10] Model-Based Reinforcement Learning with Automated Planning for Network Management
    Ordonez, Armando
    Mauricio Caicedo, Oscar
    Villota, William
    Rodriguez-Vivas, Angela
    da Fonseca, Nelson L. S.
    SENSORS, 2022, 22 (16)