Base station power control strategy in ultra-dense networks via deep reinforcement learning

Times Cited: 0
Authors
Chen, Qi [1 ]
Bao, Xuehan [1 ]
Chen, Shan [1 ]
Zhao, Junhui [1 ,2 ]
Affiliations
[1] East China Jiaotong Univ, Sch Informat & Software Engn, Nanchang 330013, Peoples R China
[2] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Ultra-dense networks (UDNs); Base station sleep; Power allocation; Energy efficiency (EE); Spectral efficiency (SE); ALLOCATION; SLEEP;
DOI
10.1016/j.phycom.2025.102655
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Within the context of 5G, Ultra-Dense Networks (UDNs) are regarded as an important network deployment strategy: a large number of low-power small cells are deployed to extend coverage and enhance service quality. However, deploying numerous small cells results in a linear increase in the energy consumption of the wireless communication system. To improve system efficiency and build green wireless communication systems, this paper investigates a base station sleeping and power allocation strategy based on deep reinforcement learning in UDNs. First, a system energy consumption model for UDNs is established, and the overall optimization problem is decomposed into two sub-problems: base station sleeping and power allocation, each handled by its own Deep Q-Network (DQN), with the two DQNs optimized simultaneously. Beyond the traditional objective of system energy efficiency (EE), system spectral efficiency (SE) and user transmission rate are jointly optimized as additional objectives. Simulation results show that the proposed method improves EE and SE by approximately 70% and 81%, respectively.
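The abstract describes a dual-DQN design: one network chooses per-station sleep states, a second allocates transmit power, and both are trained against a multi-objective reward over EE, SE, and user rate. The following is a minimal Python/PyTorch sketch of that structure, not the paper's implementation: the network sizes, the state encoding (per-station traffic load), the discrete action spaces, and the reward weights are all illustrative assumptions.

import torch
import torch.nn as nn

class DQN(nn.Module):
    """Small fully connected Q-network mapping a state vector to action values."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

N_BS = 10            # number of small cells (assumed)
N_POWER_LEVELS = 5   # discrete transmit power levels (assumed)

# One DQN picks an on/off pattern over all stations; the other picks a
# power level for the station currently being scheduled.
sleep_dqn = DQN(state_dim=N_BS, n_actions=2 ** N_BS)
power_dqn = DQN(state_dim=N_BS, n_actions=N_POWER_LEVELS)

def reward(ee: float, se: float, rate: float,
           w_ee: float = 0.5, w_se: float = 0.3, w_rate: float = 0.2) -> float:
    """Weighted multi-objective reward over EE, SE, and user rate (weights assumed)."""
    return w_ee * ee + w_se * se + w_rate * rate

# Epsilon-greedy action selection for the sleep DQN (standard DQN practice).
epsilon = 0.1
state = torch.rand(1, N_BS)  # e.g., normalized per-station traffic load (assumed)
if torch.rand(1).item() < epsilon:
    sleep_action = torch.randint(0, 2 ** N_BS, (1,)).item()
else:
    with torch.no_grad():
        sleep_action = sleep_dqn(state).argmax(dim=1).item()

A full implementation would add the standard DQN training machinery (experience replay, target networks, epsilon decay), which this sketch omits.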
Pages: 9