Learning to Adapt: Communication Load Balancing via Adaptive Deep Reinforcement Learning

Cited by: 0
Authors
Wu, Di [1 ]
Xu, Yi Tian [1 ]
Li, Jimmy [1 ]
Jenkin, Michael [1 ]
Hossain, Ekram [1 ]
Jang, Seowoo [2 ]
Xin, Yan [3 ]
Zhang, Charlie [3 ]
Liu, Xue [1 ]
Dudek, Gregory [1 ]
Affiliations
[1] Samsung Ctr Montreal, Montreal, PQ, Canada
[2] Samsung Elect, Seoul, South Korea
[3] Samsung Res Amer, Mountain View, CA USA
Source
IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM | 2023
Keywords
5G Advanced/6G; idle mode load balancing; deep reinforcement learning;
DOI
Not available
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
The association of mobile devices with network resources (e.g., base stations, frequency bands/channels), known as load balancing, is critical to reducing communication traffic congestion and improving network performance. Reinforcement learning (RL) has been shown to be effective for communication load balancing, achieving better performance than currently deployed rule-based methods, especially when the traffic load changes quickly. However, RL-based methods usually need to interact with the environment for a large number of time steps to learn an effective policy and can be difficult to tune. In this work, we aim to improve the data efficiency of RL-based solutions to make them more suitable for real-world applications. Specifically, we propose a simple yet efficient and effective deep RL-based wireless network load balancing framework. In this solution, a set of good initial values for the control actions is first selected with a cost-efficient approach and used to center the training of the RL agent. A deep RL agent is then trained to find offsets from these initial values that optimize the load balancing objective. Experimental evaluation on a set of dynamic traffic scenarios demonstrates the effectiveness and efficiency of the proposed method.
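The "baseline plus learned offset" action scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the candidate thresholds, the quick cost probe, and all function names here are assumptions introduced for illustration only.

```python
# Hypothetical sketch of the abstract's two-stage scheme:
# (1) cheaply select good initial control values, (2) execute the
# RL agent's bounded offset relative to those values.
# All names, values, and ranges below are illustrative assumptions.

def select_baseline(candidates, quick_cost):
    """Stage 1: pick cost-efficient initial control values (e.g., via a
    cheap search over fixed threshold settings) to center RL training."""
    return min(candidates, key=quick_cost)

def apply_action(baseline, offset, low=-6.0, high=6.0):
    """Stage 2: the RL agent outputs a bounded offset; the executed
    action is baseline + offset, clipped to the valid control range."""
    return [min(max(b + o, low), high) for b, o in zip(baseline, offset)]

# Example: per-band load-balancing thresholds (in dB, hypothetical).
candidates = [[0.0, 0.0, 0.0], [2.0, 4.0, 6.0], [4.0, 2.0, 0.0]]
target = [2.0, 3.0, 5.0]  # stand-in for a quick performance probe
quick_cost = lambda a: sum((x - t) ** 2 for x, t in zip(a, target))

baseline = select_baseline(candidates, quick_cost)   # -> [2.0, 4.0, 6.0]
offset = [0.5, -0.5, 0.0]  # would come from the trained RL agent
action = apply_action(baseline, offset)              # -> [2.5, 3.5, 6.0]
```

Because the agent only learns a bounded correction around an already-reasonable operating point, exploration stays near good actions, which is consistent with the data-efficiency goal stated in the abstract.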
Pages: 2973-2978
Page count: 6