Performance Optimization of Energy-Harvesting Underlay Cognitive Radio Networks Using Reinforcement Learning

Cited by: 11
Authors
Tashman, Deemah H. [1 ]
Cherkaoui, Soumaya [1 ]
Hamouda, Walaa [2 ]
Affiliations
[1] Polytechn Montreal, Dept Comp & Software Engn, Montreal, PQ, Canada
[2] Concordia Univ, Dept Elect & Comp Engn, Montreal, PQ, Canada
Source
2023 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING (IWCMC), 2023
Keywords
Energy harvesting; reinforcement learning; underlay cognitive radio networks; Internet
DOI
10.1109/IWCMC58020.2023.10182973
Chinese Library Classification (CLC) Number
TP301 [Theory, Methods]
Discipline Classification Code
081202
Abstract
In this paper, a reinforcement learning technique is employed to maximize the performance of a cognitive radio network (CRN). Two secondary users (SUs) are assumed to access the licensed band in underlay mode in the presence of primary users (PUs). In addition, the SU transmitter is assumed to be an energy-constrained device that must harvest energy in order to transmit signals to its intended destination. Two main energy sources are therefore considered: interference from the PUs' transmissions and ambient radio frequency (RF) sources. Based on a predetermined threshold, the SU selects whether to gather energy from the PUs or only from ambient sources. Energy harvesting from the PUs' messages is accomplished via the time-switching approach. Moreover, following a deep Q-network (DQN) approach, the SU transmitter decides in each time slot whether to harvest energy or transmit, and selects a suitable transmission power, so as to maximize its average data rate. Our results show that the proposed approach converges and outperforms a baseline strategy.
Pages: 1160-1165
Page count: 6
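The abstract describes a DQN agent at the SU transmitter that, in each time slot, chooses between harvesting energy and transmitting at one of several power levels, with the goal of maximizing the average data rate. The Python (PyTorch) sketch below illustrates the general shape of such an agent; it is only a minimal illustration under assumed settings. The state features (battery level, SU channel gain, PU interference level), the discrete power levels, the network architecture, and the hyperparameters are hypothetical placeholders and are not taken from the paper.

# Minimal DQN sketch for the harvest-or-transmit decision described in the abstract.
# Everything marked "assumed" is a hypothetical placeholder, not the paper's actual model.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Assumed action set: action 0 = harvest energy this slot;
# actions 1..K = transmit at one of K discrete power levels (underlay mode).
POWER_LEVELS = [0.1, 0.5, 1.0]            # watts (assumed discretization)
N_ACTIONS = 1 + len(POWER_LEVELS)
STATE_DIM = 3                             # assumed state: [battery level, SU channel gain, PU interference]

class QNet(nn.Module):
    """Small fully connected network mapping a state to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)             # experience replay buffer
gamma, eps = 0.99, 0.1                    # assumed discount factor and exploration rate

def select_action(state):
    """Epsilon-greedy choice between harvesting and the transmit-power actions."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=64):
    """One DQN update from replayed (state, action, reward, next_state) transitions."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(np.array, zip(*random.sample(replay, batch_size)))
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(r, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    q = q_net(s).gather(1, a).squeeze(1)                    # Q(s, a)
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(1).values   # bootstrapped target
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Per time slot (environment loop not shown): observe state s, pick a = select_action(s),
# apply the harvest/transmit decision, observe the reward r (e.g., the achieved rate, assumed)
# and the next state s2, then store the transition and learn:
#   replay.append((s, a, r, s2))
#   train_step()
# Periodically sync: target_net.load_state_dict(q_net.state_dict())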