Deep Reinforcement Learning Based Resource Allocation for LoRaWAN

Cited by: 3
Authors
Li, Aohan [1 ]
Affiliation
[1] Univ Electrocommun, Grad Sch Informat & Engn, Tokyo, Japan
Source
2022 IEEE 96TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-FALL) | 2022
Keywords
LoRaWAN; Resource Management; Deep Reinforcement Learning; Energy Efficiency;
DOI
10.1109/VTC2022-Fall57202.2022.10012698
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
It is predicted that the number of Internet of Things (IoT) devices will exceed 75 billion by 2025, and a large portion of these devices will be battery-powered long-range (LoRa) devices. Limited battery lifetime and spectrum shortage are the main obstacles to realizing LoRa wide-area networks (LoRaWAN) for devices in hard-to-reach areas. The dynamic spectrum access technique has gained tremendous research interest as a promising paradigm due to its outstanding performance in improving spectrum efficiency. How to realize intelligent resource allocation (RA) that avoids collisions among IoT devices while keeping energy consumption low is an important problem in LoRaWAN. However, related work either requires synchronization and estimation of prior information, such as channel state information (CSI), or does not consider the energy consumption of LoRa devices, which may decrease their energy efficiency. In addition, the necessary prior information may be challenging to obtain in future networks. To address these issues, we propose a deep Q-learning-based RA (DQLRA) method for LoRaWAN. In the proposed method, the gateway (GW) trains a deep neural network (DNN) based only on the transmission state, i.e., transmission failure or success, and the corresponding device number of each LoRa device. Each LoRa device can then make decisions using the trained DNN based on its device number and the ACK or NACK feedback. Synchronization and prior information estimation are not required in the proposed method, which may improve the energy efficiency of IoT devices. Simulation results show that the proposed method achieves the optimal frame success rate (FSR) in most scenarios.
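The feedback-only learning loop the abstract describes (each device learns a channel choice from its device number and per-frame ACK/NACK, with no synchronization or CSI) can be illustrated with a minimal sketch. This is not the paper's DQLRA implementation: the paper trains a DNN at the gateway, whereas the sketch below substitutes an independent per-device Q-table (a stateless multi-armed bandit) and a toy collision model where a frame succeeds only if no other device transmits on the same channel. The device/channel counts, learning rate, and reward values (+1 for ACK, -1 for NACK) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DEVICES, N_CHANNELS = 3, 3  # toy sizes; the paper evaluates larger networks

def frame_success_rate(choices):
    """Fraction of devices whose frame succeeds, i.e., whose chosen
    channel is used by no other device in the same frame."""
    choices = np.asarray(choices)
    per_channel = np.bincount(choices, minlength=N_CHANNELS)
    return float(np.mean(per_channel[choices] == 1))

# One Q-row per device number: a tabular stand-in for the DNN that maps
# (device number, ACK/NACK history) to a channel decision in the paper.
Q = np.zeros((N_DEVICES, N_CHANNELS))
alpha, eps = 0.1, 0.1  # learning rate and exploration rate (assumed)

for frame in range(5000):
    # Epsilon-greedy channel choice per device, using only its own Q-row.
    explore = rng.random(N_DEVICES) < eps
    choices = np.where(explore,
                       rng.integers(N_CHANNELS, size=N_DEVICES),
                       Q.argmax(axis=1))
    # Gateway feedback: ACK (+1) on sole occupancy, NACK (-1) on collision.
    per_channel = np.bincount(choices, minlength=N_CHANNELS)
    rewards = np.where(per_channel[choices] == 1, 1.0, -1.0)
    idx = np.arange(N_DEVICES)
    Q[idx, choices] += alpha * (rewards - Q[idx, choices])

policy = Q.argmax(axis=1)  # greedy channel per device number
fsr = frame_success_rate(policy)
print("learned channels:", policy.tolist(), "FSR:", fsr)
```

The negative reward on collision matters: with a 0/+1 reward two colliding devices can keep positive value estimates for the shared channel and never move, whereas a NACK penalty drives the colliding channel's estimate below the alternatives and pushes the learners toward an orthogonal allocation.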
Pages: 4
Related Papers
15 in total
[1]   The Frontiers of Deep Reinforcement Learning for Resource Management in Future Wireless HetNets: Techniques, Challenges, and Research Directions [J].
Alwarafy, Abdulmalik ;
Abdallah, Mohamed ;
Ciftler, Bekir Sait ;
Al-Fuqaha, Ala ;
Hamdi, Mounir .
IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2022, 3 :322-365
[2]  
[Anonymous], Internet of Things (IoT) and non-IoT Active Device Connections Worldwide from 2010 to 2025
[3]  
[Anonymous], Neural Networks for Machine Learning Lecture 6a Overview of mini-batch gradient descent
[4]   Batteryless LoRaWAN Communications Using Energy Harvesting: Modeling and Characterization [J].
Delgado, Carmen ;
Sanz, Jose Maria ;
Blondia, Chris ;
Famaey, Jeroen .
IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (04) :2694-2711
[5]   LoRaWAN Scheduling: From Concept to Implementation [J].
Garrido-Hidalgo, Celia ;
Haxhibeqiri, Jetmir ;
Moons, Bart ;
Hoebeke, Jeroen ;
Olivares, Teresa ;
Javier Ramirez, F. ;
Fernandez-Caballero, Antonio .
IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (16) :12919-12933
[6]   Deep Reinforcement Learning Optimal Transmission Algorithm for Cognitive Internet of Things With RF Energy Harvesting [J].
Guo, Shaoai ;
Zhao, Xiaohui .
IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2022, 8 (02) :1216-1227
[7]   LoRa-RL: Deep Reinforcement Learning for Resource Management in Hybrid Energy LoRa Wireless Networks [J].
Hamdi, Rami ;
Baccour, Emna ;
Erbad, Aiman ;
Qaraqe, Marwa ;
Hamdi, Mounir .
IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (09) :6458-6476
[8]   Enabling LPWAN Massive Access: Grant-Free Random Access with Massive MIMO [J].
Jiang, Hao ;
Qu, Daiming ;
Ding, Jie ;
Wang, Zhibing ;
He, Hui ;
Chen, Hongming .
IEEE WIRELESS COMMUNICATIONS, 2022, 29 (04) :72-77
[9]   Radio and Energy Resource Management in Renewable Energy-Powered Wireless Networks With Deep Reinforcement Learning [J].
Lee, Hyun-Suk ;
Kim, Do-Yup ;
Lee, Jang-Won .
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (07) :5435-5449
[10]  
Li A, 2022, arXiv:2208.01824