Computing Over the Sky: Joint UAV Trajectory and Task Offloading Scheme Based on Optimization-Embedding Multi-Agent Deep Reinforcement Learning

Citations: 8
Authors
Li, Xuanheng [1 ]
Du, Xinyang [1 ]
Zhao, Nan [1 ]
Wang, Xianbin [2 ]
Affiliations
[1] Dalian Univ Technol, Sch Informat & Commun Engn, Dalian 116024, Peoples R China
[2] Western Univ, Dept Elect & Comp Engn, London, ON N6A 5B9, Canada
Funding
National Natural Science Foundation of China;
Keywords
Autonomous aerial vehicles; Task analysis; Trajectory; Heuristic algorithms; Delays; Reinforcement learning; Resource management; Unmanned aerial vehicle; mobile edge computing; computation offloading; trajectory control; reinforcement learning; RESOURCE-ALLOCATION; ENERGY EFFICIENCY; ALGORITHM; NETWORKS; TIME;
DOI
10.1109/TCOMM.2023.3331029
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has emerged to support computation-intensive tasks in 6G systems. Since the battery capacity of a UAV is limited, serving as many users as possible requires a joint design of UAV trajectory and offloading strategy that accounts for service fairness, so that energy-efficient computation offloading can be provided to the users in UAV-MEC networks. Such a joint decision-making problem is far from straightforward, however, because users request diverse task types and different UAVs offer different functionalities, depending on the application programs they carry. Considering these issues, we take energy efficiency and service fairness as the objective and propose a Multi-Agent Energy-Efficient joint Trajectory and Computation Offloading (MA-ETCO) scheme. To adapt to the dynamic demands of users, we develop an optimization-embedding multi-agent deep reinforcement learning (OMADRL) algorithm. Each UAV autonomously learns its trajectory control decision via MADRL to adapt to dynamic demands, and then obtains the optimal computation offloading decision by solving a mixed-integer nonlinear programming (MINLP) problem. The computation offloading result, in turn, serves as an indicator to guide the UAVs' trajectory design. Compared to relying solely on deep reinforcement learning, this optimization-embedding approach reduces the action-space dimension and improves convergence efficiency.
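The abstract's optimization-embedding idea (an outer learned trajectory step whose reward depends on an inner offloading optimization) can be sketched roughly as follows. This is an illustrative toy, not the paper's method: all names (`jain_fairness`, `solve_offloading`, `step`) are hypothetical, a nearest-UAV assignment stands in for the actual MINLP solver, and a random action stands in for the learned MADRL policy.

```python
import numpy as np

def jain_fairness(x):
    """Jain's fairness index over per-UAV served workloads (1.0 = perfectly fair)."""
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum() + 1e-12)

def solve_offloading(dist):
    """Inner optimization stand-in: assign each user to the nearest UAV.
    (The paper solves a MINLP here; nearest assignment is only a toy proxy.)"""
    return np.argmin(dist, axis=0)  # shape (n_users,)

def step(uav_pos, user_pos, task_bits, rng):
    """One outer step: UAVs move (random placeholder for the MADRL policy),
    then the inner offloading problem is solved and a fairness-weighted
    energy-efficiency reward is computed to guide trajectory learning."""
    actions = rng.uniform(-1.0, 1.0, size=uav_pos.shape)  # per-UAV move
    uav_pos = uav_pos + actions
    # Pairwise UAV-user distances, shape (n_uav, n_users)
    dist = np.linalg.norm(uav_pos[:, None, :] - user_pos[None, :, :], axis=-1)
    assign = solve_offloading(dist)
    # Energy proxy: offloading cost grows with distance to the assigned UAV
    energy = task_bits * dist[assign, np.arange(len(assign))]
    served = np.bincount(assign, weights=task_bits, minlength=len(uav_pos))
    reward = task_bits.sum() / (energy.sum() + 1e-9) * jain_fairness(served)
    return uav_pos, assign, reward
```

In the actual OMADRL scheme the outer action comes from each agent's trained policy network and the inner step returns the MINLP-optimal offloading decision; the point of the sketch is only the loop structure, in which the inner optimum shapes the reward seen by the outer learner, shrinking the action space the agents must explore.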
Pages: 1355-1369
Page count: 15