Radio and Energy Resource Management in Renewable Energy-Powered Wireless Networks With Deep Reinforcement Learning

Cited by: 19
Authors
Lee, Hyun-Suk [1 ]
Kim, Do-Yup [2 ]
Lee, Jang-Won [2 ]
Affiliations
[1] Sejong Univ, Sch Intelligent Mechatron Engn, Seoul 05006, South Korea
[2] Yonsei Univ, Dept Elect & Elect Engn, Seoul 03722, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Wireless networks; Energy resources; Renewable energy sources; Resource management; Power control; Intercell interference; Complexity theory; Deep learning; reinforcement learning; renewable energy; resource management; wireless networks; MOBILE NETWORKS; IOT DEVICES; ALLOCATION; OPTIMIZATION;
DOI
10.1109/TWC.2022.3140731
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification
0808; 0809;
Abstract
In this paper, we study radio and energy resource management in renewable energy-powered wireless networks, where base stations (BSs) are powered by both on-grid and renewable energy sources and can share their harvested energy with each other. To efficiently manage these resources, we propose a hierarchical and distributed resource management framework based on deep reinforcement learning. The proposed framework minimizes the on-grid energy consumption while satisfying the data rate requirement of each user. It is composed of three policies organized in a distributed and hierarchical way. An intercell interference coordination policy constrains the transmission power at each BS to coordinate the intercell interference among the BSs. Under these power constraints, a distributed radio resource allocation policy at each BS determines its own user scheduling and power control. Lastly, an energy sharing policy manages the energy resources of the BSs by sharing the harvested energy between them via power lines. Through simulations, we demonstrate that the proposed framework can effectively reduce the on-grid energy consumption while satisfying the data rate requirements.
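The hierarchy described in the abstract can be sketched as three cooperating decision layers. The following is a minimal toy sketch, not the paper's method: the class names, the uniform power cap, the greedy scheduling rule, and the energy-equalizing rule are all illustrative assumptions standing in for the deep reinforcement learning policies the paper actually trains.

```python
class InterferenceCoordinationPolicy:
    """Top layer: caps each BS's transmit power to limit intercell interference."""
    def power_caps(self, num_bs):
        # Uniform cap across BSs (assumption; the paper learns this with DRL).
        return [1.0] * num_bs


class RadioResourcePolicy:
    """Middle layer: per-BS user scheduling and power control under the cap."""
    def allocate(self, cap, demands):
        # Toy rule: serve the user with the largest demand, clipped to the cap.
        user = max(range(len(demands)), key=lambda u: demands[u])
        return user, min(cap, demands[user])


class EnergySharingPolicy:
    """Bottom layer: redistributes harvested energy between BSs via power lines."""
    def share(self, harvested):
        # Toy rule: equalize harvested energy across all BSs.
        avg = sum(harvested) / len(harvested)
        return [avg] * len(harvested)


def on_grid_energy(num_bs, demands, harvested):
    """One decision round: each BS draws on-grid energy only for the shortfall
    between its transmit power and its (shared) harvested energy."""
    caps = InterferenceCoordinationPolicy().power_caps(num_bs)
    radio = RadioResourcePolicy()
    shared = EnergySharingPolicy().share(harvested)
    total = 0.0
    for b in range(num_bs):
        _, power = radio.allocate(caps[b], demands[b])
        total += max(0.0, power - shared[b])
    return total


# Two BSs, two users each; BS 1's surplus harvest covers part of BS 0's need.
print(on_grid_energy(2, [[0.6, 0.9], [0.4, 0.2]], [0.5, 1.1]))
```

In the paper each of these layers is a learned policy rather than a fixed rule, but the sketch shows the structural point: the interference cap constrains the radio layer, and energy sharing reduces how much of the resulting power draw must come from the grid.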
Pages: 5435-5449
Number of pages: 15