Deep Reinforcement Learning for Rechargeable AAV-Assisted Data Collection From Dense Mobile Sensor Nodes

Citations: 0
Authors
Bai, Shanshan [1 ]
Wang, Xueyuan [1 ]
Gursoy, M. Cenk [2 ]
Jiang, Guangqi [1 ]
Xu, Shoukun [1 ]
Affiliations
[1] Changzhou Univ, Sch Comp Sci & Artificial Intelligence, Changzhou 213164, Peoples R China
[2] Syracuse Univ, Dept Elect Engn & Comp Sci, Syracuse, NY 13244 USA
Funding
National Natural Science Foundation of China;
Keywords
Autonomous aerial vehicles; Deep reinforcement learning; Trajectory; Internet of Things; Batteries; Vehicle dynamics; Optimization; Energy efficiency; Energy consumption; rechargeable AAV; data collection; mobile sensor nodes; UAV;
DOI
10.1109/ACCESS.2025.3539888
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In the realm of the Internet of Things (IoT), Autonomous Aerial Vehicles (AAVs) have garnered significant attention due to their high mobility and cost-effectiveness. However, limited onboard energy, kinematic constraints, and highly dynamic environments pose significant challenges for AAVs in continuous, real-time data collection scenarios. To address these challenges, we investigate the use of a rechargeable AAV for data collection from densely deployed mobile sensor nodes. We formulate the problem as a Markov decision process and design a deep reinforcement learning approach called guided search twin-dueling-double deep Q-Network (GS-TD3QN). Within this framework, the goal is to jointly optimize the flight path, charging strategy, and data upload intervals so as to maximize the total number of uploaded data packets, improve energy efficiency, and minimize the average age of information. Additionally, we propose an action filter that mitigates collision risks, and we explore various scheduling strategies. Finally, simulation results confirm the effectiveness of the proposed algorithm and validate its applicability across varying numbers of nodes.
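To make the learning components concrete, the sketch below shows a dueling Q-network combined with masked greedy action selection, written in PyTorch. This is an illustrative reconstruction from the abstract alone, not the authors' GS-TD3QN implementation: the state and action dimensions, the hidden width, and the valid_mask produced by the action filter are all hypothetical placeholders.

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: separate value and advantage streams.

    Sizes here are hypothetical; the paper's network details are not
    given in the abstract.
    """
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
        return v + a - a.mean(dim=-1, keepdim=True)

def filtered_greedy_action(q_net: DuelingQNet,
                           state: torch.Tensor,
                           valid_mask: torch.Tensor) -> int:
    """Pick the best action among those the action filter allows.

    valid_mask is a boolean tensor over actions; entries set to False
    (e.g., moves that would risk a collision) are excluded from argmax.
    How the mask is computed is assumed, not taken from the paper.
    """
    with torch.no_grad():
        q = q_net(state)
        q = q.masked_fill(~valid_mask, float("-inf"))  # rule out filtered actions
        return int(q.argmax(dim=-1).item())

The "double" part of a double DQN is conventionally realized in the target computation, where the online network selects the next action and a separate target network evaluates it, as in the fragment below (batch shapes assumed):

# next_q_online, next_q_target: [batch, n_actions]; rewards, dones: [batch]
best = next_q_online.argmax(dim=-1, keepdim=True)
target = rewards + gamma * (1 - dones) * next_q_target.gather(1, best).squeeze(1)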
Pages: 28398-28407
Page count: 10