Deep Reinforcement Learning Based Energy Efficient Multi-UAV Data Collection for IoT Networks

Cited by: 24
Authors
Khodaparast, Seyed Saeed [1 ]
Lu, Xiao [1 ]
Wang, Ping [1 ]
Uyen Trang Nguyen [1 ]
Affiliations
[1] York Univ, Dept Elect Engn & Comp Sci, Toronto, ON M3J 1P3, Canada
Source
IEEE OPEN JOURNAL OF VEHICULAR TECHNOLOGY | 2021, Vol. 2
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Sensors; Data collection; Energy consumption; Trajectory; Navigation; Unmanned aerial vehicles; Task analysis; unmanned aerial vehicle (UAV); Internet of Things (IoT); deep reinforcement learning (DRL); energy consumption; AUTONOMOUS NAVIGATION;
DOI
10.1109/OJVT.2021.3085421
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809;
Abstract
Unmanned aerial vehicles (UAVs) are regarded as an emerging technology that can be effectively utilized to perform data collection tasks in Internet of Things (IoT) networks. However, both the UAVs and the sensors in these networks are energy-limited devices, which necessitates an energy-efficient data collection procedure to ensure the network lifetime. In this paper, we propose a multi-UAV-assisted network, where the UAVs fly to the ground sensors and control the sensors' transmit power during the data collection time. Our goal is to minimize the total energy consumption of the UAVs and the sensors needed to accomplish the data collection mission. We formulate this problem into three sub-problems: single-UAV navigation, sensor power control, and multi-UAV scheduling, and model each part as a finite-horizon Markov Decision Process (MDP). We deploy deep reinforcement learning (DRL)-based frameworks to solve each part. Specifically, we use the deep deterministic policy gradient (DDPG) method to generate the best trajectory for the UAVs in an obstacle-constrained environment, given each UAV's starting position and target sensor. We also deploy DDPG to control the sensors' transmit power during data collection. To schedule activity plans for each UAV to visit the sensors, we propose a multi-agent deep Q-learning (DQL) approach that takes the total energy consumption of the UAVs on each path into account. Our simulations show that the UAVs can find a safe and optimal path for each of their trips. Continuous power control of the sensors achieves better performance than fixed-power approaches in terms of the total energy consumption during data collection. In addition, compared to two commonly used baselines, our scheduling framework achieves better and near-optimal results.
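The decomposition described in the abstract treats each sub-problem (navigation, power control, scheduling) as a finite-horizon MDP solved by a DRL agent. The sketch below illustrates only the navigation sub-problem, and substitutes simple tabular Q-learning for the paper's DDPG agent; the grid size, reward values, obstacle, and all function names are illustrative assumptions, not details from the paper.

```python
import random

# Illustrative stand-in for the paper's navigation MDP: a UAV starts at
# (0, 0) on a small grid and must reach a target sensor while avoiding
# an obstacle. Each move costs "energy" (reward -1), hitting the
# obstacle is penalized, and reaching the sensor ends the episode.
GRID = 5
OBSTACLE = (2, 2)
TARGET = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """One navigation step: returns (next_state, reward, done)."""
    nxt = (min(max(state[0] + action[0], 0), GRID - 1),
           min(max(state[1] + action[1], 0), GRID - 1))
    if nxt == OBSTACLE:           # flying into the obstacle is penalized
        return state, -10.0, False
    if nxt == TARGET:             # reaching the sensor ends the episode
        return nxt, 100.0, True
    return nxt, -1.0, False       # each move consumes energy

def train(episodes=3000, alpha=0.5, gamma=0.95, eps=0.2, horizon=50):
    """Tabular Q-learning over finite-horizon episodes."""
    random.seed(0)
    q = {}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(horizon):  # finite horizon, as in the paper's MDPs
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            s2, r, done = step(s, ACTIONS[a])
            best_next = 0.0 if done else max(
                q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((s, a), 0.0))
            s = s2
            if done:
                break
    return q

def greedy_path(q, horizon=50):
    """Roll out the learned greedy policy from the start cell."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(horizon):
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

After training, the greedy rollout yields an obstacle-free path from the start cell to the target sensor; the paper's DDPG formulation plays the same role in a continuous state-action space.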
Pages: 249-260
Page count: 12