A Federated Reinforcement Learning Approach for Optimizing Wireless Communication in UAV-Enabled IoT Network With Dense Deployments

Times Cited: 19
Authors
Yang, Fan [1 ]
Zhao, Zijie [1 ]
Huang, Jie [1 ]
Liu, Peifeng [1 ]
Tolba, Amr [2 ]
Yu, Keping [3 ]
Guizani, Mohsen [4 ]
Affiliations
[1] Chongqing Univ Technol, Sch Elect & Elect Engn, Chongqing 400054, Peoples R China
[2] King Saud Univ, Community Coll, Comp Sci Dept, Riyadh 11437, Saudi Arabia
[3] Hosei Univ, Grad Sch Sci & Engn, Tokyo 1848584, Japan
[4] Mohamed Bin Zayed Univ Artificial Intelligence, Machine Learning Dept, Abu Dhabi, U Arab Emirates
Funding
National Natural Science Foundation of China;
Keywords
Resource management; Internet of Things; Interference; Throughput; Device-to-device communication; Data models; Autonomous aerial vehicles; Federated reinforcement learning (FRL); hypergraph; resource allocation; unmanned aerial vehicle (UAV)-enabled Internet of Things (IoT); RESOURCE-ALLOCATION; MANAGEMENT; SCHEME;
DOI
10.1109/JIOT.2024.3434713
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In unmanned aerial vehicle (UAV)-enabled Internet of Things (IoT) networks, the communication ranges of densely deployed IoT devices overlap, resulting in wireless resource conflicts among them. Achieving conflict-free resource allocation is therefore a pressing challenge for UAV-enabled IoT networks. To tackle this issue, a hypergraph is used to quantify conflicts, and a federated reinforcement learning (RL)-based resource allocation framework is proposed. Specifically, a conflict graph model is developed for UAV-enabled IoT networks with dense deployments. This model is then converted into a conflict hypergraph model using hypergraph and faction theory, so that the conflict-avoidance problem of resource allocation can be reformulated as a hypergraph node coloring problem. The coloring problem is formulated as a Markov decision process and solved with a deep RL-based approach. Additionally, to distribute the computational workload across the network and alleviate the burden on the central server, we propose the FedAvg dueling double deep Q-network (FedAvg-D3QN). Simulation results verify that the proposed FedAvg-D3QN outperforms baseline approaches in resource reuse rate and throughput.
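
The abstract names FedAvg-style aggregation of dueling double deep Q-networks (D3QN) as the core mechanism for spreading the training load across the network. The sketch below illustrates, under stated assumptions, what one aggregation round could look like: each agent holds a local dueling Q-network, and a server averages the parameters. All class and function names, network sizes, and the uniform aggregation weights are illustrative assumptions, not details taken from the paper.

# Minimal sketch (assumption-based) of FedAvg-style averaging over dueling Q-networks.
import copy
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Dueling Q-network: shared trunk, separate value and advantage heads."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=-1, keepdim=True)

def fedavg(global_model: nn.Module, local_models: list, weights: list) -> None:
    """Overwrite global parameters with the weighted average of local parameters."""
    global_state = global_model.state_dict()
    for key in global_state:
        global_state[key] = sum(
            w * m.state_dict()[key].float() for w, m in zip(weights, local_models)
        )
    global_model.load_state_dict(global_state)

# Usage sketch: each agent trains its local copy on its own experience (double-DQN
# updates would go where indicated), then the server averages and broadcasts.
if __name__ == "__main__":
    state_dim, n_actions, n_agents = 8, 4, 3
    server = DuelingDQN(state_dim, n_actions)
    agents = [copy.deepcopy(server) for _ in range(n_agents)]
    # ... local double-DQN training on each agent's own replay buffer ...
    fedavg(server, agents, weights=[1.0 / n_agents] * n_agents)

In this sketch only model parameters travel between agents and the server, which is consistent with the abstract's stated goal of relieving the central server rather than collecting raw experience centrally; how the paper actually schedules local updates and aggregation rounds is not specified here.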
Pages: 33953-33966
Page Count: 14