Deep reinforcement learning based computation offloading for xURLLC services with UAV-assisted IoT-based multi-access edge computing system

Cited by: 3
Authors
Fatima, Nida [1 ]
Saxena, Paresh [1 ]
Giambene, Giovanni [2 ]
Affiliations
[1] BITS Pilani, Dept Comp Sci & Informat Syst, Hyderabad Campus, Hyderabad 500078, India
[2] Univ Siena, Dept Informat Engn & Math Sci, I-53100 Siena, Italy
Keywords
Deep reinforcement learning; Computation offloading; Internet of Things; Multi-access edge computing; Unmanned aerial vehicles; Next-generation ultra-reliable and low-latency communications; Resource allocation
DOI
10.1007/s11276-023-03596-y
CLC number
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
New Internet of Things (IoT) based applications with stricter key performance indicators (KPIs) such as round-trip delay, network availability, energy efficiency, spectral efficiency, security, age of information, throughput, and jitter present unprecedented challenges in achieving next-generation ultra-reliable and low-latency communications (xURLLC) for sixth-generation (6G) communication systems and beyond. In this paper, we aim to collaboratively utilize technologies such as deep reinforcement learning (DRL), unmanned aerial vehicles (UAVs), and multi-access edge computing (MEC) to meet the aforementioned KPIs and support xURLLC services. We present a DRL-empowered UAV-assisted IoT-based MEC system in which a UAV carries a MEC server and provides computation services to IoT devices. Specifically, we employ twin delayed deep deterministic policy gradient (TD3), a DRL algorithm, to find optimal computation offloading policies while simultaneously minimizing both the processing delay and the energy consumption of IoT devices, which inherently influence the KPI requirements. Numerical results illustrate the effectiveness of the proposed approach, which significantly reduces processing delay and energy consumption and converges quickly, outperforming other state-of-the-art DRL-based computation offloading algorithms, including Double Deep Q-Network (DDQN) and Deep Deterministic Policy Gradient (DDPG).
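To make the offloading decision loop concrete, the sketch below shows a minimal TD3-style update for a continuous offloading-ratio action, roughly matching the abstract's description (twin critics, target policy smoothing, delayed actor updates, and a reward that penalizes processing delay and device energy). It is not the authors' implementation: the state variables, network sizes, reward weights, and names such as td3_update and STATE_DIM are illustrative assumptions only.

# Minimal, illustrative TD3-style sketch (assumed details, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 4, 1   # assumed state: [task size, CPU freq, channel gain, UAV distance]

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM), nn.Sigmoid())  # offloading ratio in [0, 1]
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, actor_t = Actor(), Actor()
critic1, critic2, critic1_t, critic2_t = Critic(), Critic(), Critic(), Critic()
actor_t.load_state_dict(actor.state_dict())
critic1_t.load_state_dict(critic1.state_dict())
critic2_t.load_state_dict(critic2.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(list(critic1.parameters()) + list(critic2.parameters()), lr=1e-3)

GAMMA, TAU, POLICY_DELAY, NOISE_STD, NOISE_CLIP = 0.99, 0.005, 2, 0.1, 0.3

def reward(delay_s, energy_j, w_delay=0.5, w_energy=0.5):
    # Assumed reward: negative weighted sum of processing delay and device energy.
    return -(w_delay * delay_s + w_energy * energy_j)

def td3_update(step, s, a, r, s_next):
    """One TD3 update on a batch (s, a, r, s_next) of offloading transitions."""
    with torch.no_grad():
        # Target policy smoothing: clipped noise on the target action.
        noise = (torch.randn_like(a) * NOISE_STD).clamp(-NOISE_CLIP, NOISE_CLIP)
        a_next = (actor_t(s_next) + noise).clamp(0.0, 1.0)
        # Clipped double-Q target (min over twin critics) reduces overestimation.
        q_target = r + GAMMA * torch.min(critic1_t(s_next, a_next),
                                         critic2_t(s_next, a_next))
    critic_loss = F.mse_loss(critic1(s, a), q_target) + F.mse_loss(critic2(s, a), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    if step % POLICY_DELAY == 0:  # delayed actor and soft target updates
        actor_loss = -critic1(s, actor(s)).mean()
        opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
        for net, net_t in [(actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)]:
            for p, p_t in zip(net.parameters(), net_t.parameters()):
                p_t.data.mul_(1 - TAU).add_(TAU * p.data)

# Toy usage with a random batch of 32 transitions.
s = torch.rand(32, STATE_DIM); a = torch.rand(32, ACTION_DIM)
r = torch.randn(32, 1); s_next = torch.rand(32, STATE_DIM)
td3_update(step=0, s=s, a=a, r=r, s_next=s_next)

The clipped double-Q target and the delayed, smoothed actor updates are the TD3 mechanisms that help the offloading policy converge quickly without the value overestimation that plain DDPG can suffer from, which is consistent with the comparison reported in the abstract.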
Pages: 7275-7291
Page count: 17
Related papers
50 records in total
[41] Peng, Haixia; Shen, Xuemin. Deep Reinforcement Learning Based Resource Management for Multi-Access Edge Computing in Vehicular Networks. IEEE Transactions on Network Science and Engineering, 2020, 7(4): 2416-2428.
[42] Deng, Yiqin; Zhang, Haixia; Chen, Xianhao; Fang, Yuguang. UAV-Assisted Multi-Access Edge Computing With Altitude-Dependent Computing Power. IEEE Transactions on Wireless Communications, 2024, 23(8): 9404-9418.
[43] Hu, Jiangyi; Li, Yang; Zhao, Gaofeng; Xu, Bo; Ni, Yiyang; Zhao, Haitao. Deep Reinforcement Learning for Task Offloading in Edge Computing Assisted Power IoT. IEEE Access, 2021, 9: 93892-93901.
[44] Huang, Hui; Ye, Qiang; Zhou, Yitong. Deadline-Aware Task Offloading With Partially-Observable Deep Reinforcement Learning for Multi-Access Edge Computing. IEEE Transactions on Network Science and Engineering, 2022, 9(6): 3870-3885.
[45] Alameddine, Hyame Assem; Sharafeddine, Sanaa; Sebbah, Samir; Ayoubi, Sara; Assi, Chadi. Dynamic Task Offloading and Scheduling for Low-Latency IoT Services in Multi-Access Edge Computing. IEEE Journal on Selected Areas in Communications, 2019, 37(3): 668-682.
[46] Myyara, Marouane; Lagnfdi, Oussama; Darif, Anouar; Farchane, Abderrazak. Enhancing QoS for IoT Devices through Heuristics-based Computation Offloading in Multi-access Edge Computing. Infocommunications Journal, 2024, 16(4): 10-17.
[47] Zhuang, Shen; Gao, Chengxi; He, Ying; Yu, F. Richard; Wang, Yuhang; Pan, Weike; Ming, Zhong. Qc-DQN: A Novel Constrained Reinforcement Learning Method for Computation Offloading in Multi-access Edge Computing. 2022 International Joint Conference on Neural Networks (IJCNN), 2022.
[48] Yang, Bo; Cao, Xuelin; Bassey, Joshua; Li, Xiangfang; Kroecker, Timothy; Qian, Lijun. Computation Offloading in Multi-Access Edge Computing Networks: A Multi-Task Learning Approach. ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019.
[49] Shi, Junling; Li, Chunyu; Guan, Yunchong; Cong, Peiyu; Li, Jie. Multi-UAV-assisted computation offloading in DT-based networks: A distributed deep reinforcement learning approach. Computer Communications, 2023, 210: 217-228.
[50] Zhang, Kaiyuan; Gui, Xiaolin; Ren, Dewang; Li, Defu. Energy-Latency Tradeoff for Computation Offloading in UAV-Assisted Multiaccess Edge Computing System. IEEE Internet of Things Journal, 2021, 8(8): 6709-6719.