Deep Reinforcement Learning for Energy-Efficient Data Dissemination Through UAV Networks

Cited by: 0
Authors
Ali, Abubakar S. [1]
Al-Habob, Ahmed A. [2]
Naser, Shimaa [1]
Bariah, Lina [3]
Dobre, Octavia A. [2]
Muhaidat, Sami [1,4]
Affiliations
[1] Khalifa Univ, KU 6G Res Ctr, Dept Comp & Informat Engn, Abu Dhabi, U Arab Emirates
[2] Mem Univ, Dept Elect & Comp Engn, St John's, NL A1C 5S7, Canada
[3] Technol Innovat Inst, Abu Dhabi, U Arab Emirates
[4] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
Source
IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY | 2024, Vol. 5
Keywords
Autonomous aerial vehicles; Internet of Things; Data dissemination; Optimization; Energy consumption; Heuristic algorithms; Energy efficiency; deep learning; Internet-of-Things (IoT); reinforcement learning (RL); unmanned aerial vehicle (UAV); SENSOR NETWORKS; MANAGEMENT; INTERNET
DOI
10.1109/OJCOMS.2024.3398718
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
The rise of the Internet of Things (IoT), marked by unprecedented growth in connected devices, has created an insatiable demand for supplementary computational and communication resources. Integrating unmanned aerial vehicles (UAVs) into IoT ecosystems is a promising way to meet this demand, offering extended network coverage, agile deployment, and efficient data gathering from geographically challenging locales. Despite these benefits, UAV technology faces significant challenges, including limited energy resources, the need to adapt to dynamic environments, and the imperative of autonomous operation to meet the evolving demands of IoT networks. In light of this, we introduce a UAV-assisted data dissemination framework that minimizes the total energy expenditure of both the UAV and the spatially distributed IoT devices. The framework addresses three interconnected subproblems: device classification, device association, and path planning. For device classification, we employ two distinct deep reinforcement learning (DRL) agents, Double Deep Q-Network (DDQN) and Proximal Policy Optimization (PPO), to classify devices into two tiers. For device association, we propose a nearest-neighbor heuristic that associates each Tier 2 device with a Tier 1 device. For path planning, we apply the Lin-Kernighan heuristic to plan the UAV's path among the Tier 1 devices. We compare our method with three baseline approaches and demonstrate through simulation results that it significantly reduces energy consumption and yields a near-optimal solution in a fraction of the time required by brute-force methods and ant colony heuristics. Consequently, our framework presents an efficient and practical alternative for energy-efficient data dissemination in UAV-assisted IoT networks.
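The association and path-planning steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes 2-D Euclidean device positions, uses a plain nearest-neighbor rule for Tier 2 association, and substitutes a simple 2-opt improvement over a greedy tour as a stand-in for the full Lin-Kernighan heuristic. All function names are hypothetical.

```python
import math

def nearest_tier1(tier2, tier1):
    """Associate each Tier 2 device with its nearest Tier 1 device (Euclidean)."""
    assoc = {}
    for i, p in enumerate(tier2):
        assoc[i] = min(range(len(tier1)), key=lambda k: math.dist(p, tier1[k]))
    return assoc

def tour_length(tour, pts):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def plan_path(pts):
    """Greedy nearest-neighbor tour construction followed by 2-opt improvement
    (a simplified stand-in for the Lin-Kernighan heuristic)."""
    n = len(pts)
    # Construct an initial tour greedily from point 0.
    tour, rest = [0], set(range(1, n))
    while rest:
        nxt = min(rest, key=lambda k: math.dist(pts[tour[-1]], pts[k]))
        tour.append(nxt)
        rest.remove(nxt)
    # 2-opt: reverse segments while doing so shortens the tour.
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-9:
                    tour, improved = cand, True
    return tour
```

In practice, full Lin-Kernighan uses variable-depth (k-opt) moves and is much stronger than 2-opt, but the sketch above captures the same local-search idea at a fraction of the complexity.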
Pages: 5567-5583 (17 pages)