Multiobjective Deep Reinforcement Learning for Computation Offloading and Trajectory Control in UAV-Base-Station-Assisted MEC

Cited by: 2
Authors
Huang, Hao [1 ]
Chai, Zheng-Yi [1 ]
Sun, Bao-Shan [1 ]
Kang, Hong-Shen [1 ]
Zhao, Ying-Jie [1 ]
Affiliations
[1] Tiangong Univ, Sch Comp Sci, Tianjin Key Lab Autonomous Intelligence Technol &, Tianjin 300387, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 19
Funding
National Natural Science Foundation of China;
Keywords
Autonomous aerial vehicles; Task analysis; Delays; Energy consumption; Real-time systems; Trajectory; Servers; Computation offloading; multiaccess edge computing (MEC); multiobjective reinforcement learning; trajectory control; unmanned aerial vehicle (UAV); RESOURCE-ALLOCATION;
DOI
10.1109/JIOT.2024.3420884
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
Unmanned aerial vehicle (UAV) and base station jointly assisted multiaccess edge computing (UB-MEC) is a promising technology for providing flexible computing services to resource-limited devices. Because device loads are not observed in real time and demand in UB-MEC is dynamic, it is highly challenging to make the UAV respond in real time to users' dynamic preferences. To this end, we propose a multiobjective deep reinforcement learning (MODRL) approach for computation offloading and trajectory control (COTC) of the UAV. First, the problem is formulated as a multiobjective Markov decision process (MOMDP), where the traditional scalar reward is extended to a vector corresponding to the amount of task data collected, the completion delay, and the UAV's energy consumption, and the objective weights are dynamically adjusted to meet different user preferences. Then, since the device load information stored on the UAV is not real-time, an attentional long short-term memory (ALSTM) network is designed to predict real-time states by automatically focusing on important historical information. In addition, near on-policy experience replay (NOER) replays experiences that are close to the current policy, which better promotes learning of the current strategy. Simulation results show that the proposed algorithm obtains action policies that meet users' time-varying preferences and achieves a good balance among the different objectives under different preferences.
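To make the vector-reward formulation concrete, the minimal sketch below (not the authors' code; the reward components, signs, and preference weights are illustrative assumptions) builds the three-element reward vector described in the abstract and linearly scalarizes it with a time-varying user preference vector, as a MOMDP agent might do at each training step.

```python
import numpy as np

# Illustrative only: the exact reward definitions in the paper may differ.
def vector_reward(data_collected, completion_delay, uav_energy):
    """Three-element MOMDP reward: maximize collected task data,
    minimize completion delay and UAV energy consumption."""
    return np.array([data_collected, -completion_delay, -uav_energy])

def scalarize(reward_vec, preference):
    """Linear scalarization with a nonnegative preference weight vector,
    normalized to sum to 1 so weights act as relative importances."""
    w = np.asarray(preference, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, reward_vec))

# Example: at step t the user currently prioritizes energy saving.
r = vector_reward(data_collected=5.0, completion_delay=1.2, uav_energy=0.8)
w_t = [0.2, 0.3, 0.5]        # time-varying preference weights (assumed values)
print(scalarize(r, w_t))     # scalar reward fed to the DRL update
```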
Pages: 31805-31821
Number of pages: 17