Task Offloading Strategy for UAV-Assisted Mobile Edge Computing with Covert Transmission
Cited by: 2
Authors:
Hu, Zhijuan [1]
Zhou, Dongsheng [1]
Shen, Chao [1]
Wang, Tingting [2]
Liu, Liqiang [1]
Affiliations:
[1] Xian Technol Univ, Sch Comp Sci & Engn, Xian 710021, Peoples R China
[2] Xidian Univ, Sch Telecommun Engn, Xian 710071, Peoples R China
Source:
Electronics
Funding:
National Natural Science Foundation of China;
Keywords:
mobile edge computing;
covert communication;
unmanned aerial vehicle;
deep deterministic policy gradient;
prioritized experience replay;
RESOURCE-ALLOCATION;
OPTIMIZATION;
POWER;
DOI:
10.3390/electronics14030446
CLC number:
TP [automation technology; computer technology];
Subject classification code:
0812;
Abstract:
Task offloading strategies for unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) systems have emerged as a promising solution for computationally intensive applications. However, the broadcast and open nature of radio transmissions makes such systems vulnerable to eavesdropping. Developing strategies that perform task offloading in a secure communication environment is therefore critical for both ensuring the security and optimizing the performance of MEC systems. In this paper, we first design an architecture that uses covert communication techniques so that a UAV-assisted MEC system can securely offload highly confidential tasks from the relevant user equipment (UE) and perform the associated computations. Then, formulating the problem as a Markov Decision Process (MDP) and incorporating the Prioritized Experience Replay (PER) mechanism into the Deep Deterministic Policy Gradient (DDPG) algorithm, we propose a PER-DDPG algorithm that aims to minimize the maximum processing delay of the system and the warden's correct detection rate by jointly optimizing resource allocation, the movement of the UAV base station (UAV-BS), and the transmit power of the jammer. Simulation results demonstrate the convergence and effectiveness of the proposed approach. Compared with the baseline algorithms Deep Q-Network (DQN) and DDPG, the PER-DDPG algorithm achieves significant performance improvements, with an average reward increase of over 16% relative to DDPG and over 53% relative to DQN. Furthermore, PER-DDPG exhibits the fastest convergence speed among the three algorithms, highlighting its efficiency in jointly optimizing task offloading and communication security.
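As a rough illustration of the PER mechanism named in the abstract, the Python sketch below shows a proportional prioritized replay buffer of the kind typically paired with DDPG: transitions are sampled with probability proportional to a TD-error-based priority, and importance-sampling weights are returned to correct the resulting bias in the critic update. This is a minimal sketch under common assumptions; the class name, hyperparameters (capacity, alpha, beta, eps), and structure are illustrative and are not taken from the paper's implementation.

```python
# Minimal proportional prioritized experience replay (PER) buffer sketch.
# Hyperparameter names and defaults are illustrative assumptions.
import numpy as np

class PrioritizedReplayBuffer:
    def __init__(self, capacity=100_000, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha        # how strongly priorities skew sampling
        self.beta = beta          # importance-sampling correction strength
        self.eps = eps            # keeps priorities strictly positive
        self.buffer = []          # (state, action, reward, next_state, done)
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by
        # non-uniform sampling; normalized by the maximum for stability.
        weights = (len(self.buffer) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Priority is proportional to the magnitude of the critic's TD error.
        self.priorities[idx] = np.abs(td_errors) + self.eps
```

In a DDPG-style training loop, one would sample a batch, scale the critic's squared TD errors by the returned weights before averaging the loss, and then feed the fresh TD errors back through update_priorities so that informative transitions are replayed more often.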
Pages: 21