Base station power control strategy in ultra-dense networks via deep reinforcement learning

Cited by: 0
Authors
Chen, Qi [1 ]
Bao, Xuehan [1 ]
Chen, Shan [1 ]
Zhao, Junhui [1 ,2 ]
Affiliations
[1] East China Jiaotong Univ, Sch Informat & Software Engn, Nanchang 330013, Peoples R China
[2] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Ultra-dense networks (UDNs); Base station sleep; Power allocation; Energy efficiency (EE); Spectral efficiency (SE); ALLOCATION; SLEEP;
DOI
10.1016/j.phycom.2025.102655
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline Codes
0808; 0809;
Abstract
Within the context of 5G, Ultra-Dense Networks (UDNs) are regarded as an important deployment strategy that uses a large number of low-power small cells to extend coverage and improve service quality. However, deploying numerous small cells drives a linear increase in the energy consumption of wireless communication systems. To improve system efficiency and enable green wireless communication, this paper investigates a deep-reinforcement-learning-based base station sleeping and power allocation strategy for UDNs. First, a system energy consumption model for UDNs is established, and the overall optimization problem is decomposed into two sub-problems: base station sleep and power allocation. Two Deep Q-Networks (DQNs) are employed in parallel, one for each sub-problem. Beyond the traditional system energy efficiency (EE), the study also treats system spectral efficiency (SE) and user transmission rate as simultaneous optimization objectives. Simulation results show that the proposed method improves EE and SE by approximately 70% and 81%, respectively.
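The EE/SE trade-off behind the sleep decision described in the abstract can be illustrated with a toy model. This is a minimal sketch, not the paper's model: the cell parameters, power figures, and the shared-band assumption below are all illustrative, with EE taken as sum rate over total consumed power and SE as sum rate per hertz, following the standard Shannon-rate definitions.

```python
import math

def shannon_rate(bandwidth_hz, sinr):
    """Achievable rate (bit/s) of one link via the Shannon formula."""
    return bandwidth_hz * math.log2(1.0 + sinr)

def system_metrics(cells, bandwidth_hz=20e6, p_static=6.8, p_sleep=0.5):
    """Return (sum rate, total power, EE in bit/J, SE in bit/s/Hz).

    `cells` is a list of (active, tx_power_w, sinr) tuples; a sleeping
    cell contributes no rate and draws only standby power `p_sleep`.
    All power values here are illustrative assumptions.
    """
    sum_rate, total_power = 0.0, 0.0
    for active, tx_power_w, sinr in cells:
        if active:
            sum_rate += shannon_rate(bandwidth_hz, sinr)
            total_power += p_static + tx_power_w
        else:
            total_power += p_sleep
    ee = sum_rate / total_power      # energy efficiency (bit/J)
    se = sum_rate / bandwidth_hz     # spectral efficiency (bit/s/Hz)
    return sum_rate, total_power, ee, se

# Putting a lightly loaded cell (low SINR) to sleep raises EE even
# though the sum rate drops slightly:
all_on = [(True, 1.0, 10.0), (True, 1.0, 0.1)]
one_asleep = [(True, 1.0, 10.0), (False, 0.0, 0.0)]
_, _, ee_on, _ = system_metrics(all_on)
_, _, ee_sleep, _ = system_metrics(one_asleep)
```

In this toy setting `ee_sleep` exceeds `ee_on`, which is the intuition a DQN-based sleep policy would learn from such a reward signal; the paper's actual reward jointly weighs EE, SE, and user transmission rate.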
Pages: 9