Task Offloading With Service Migration for Satellite Edge Computing: A Deep Reinforcement Learning Approach

Cited by: 3
Authors
Wu, Haonan [1]
Yang, Xiumei [1]
Bu, Zhiyong [1,2]
Affiliations
[1] Chinese Acad Sci, Shanghai Inst Microsyst & Informat Technol, Shanghai 200050, Peoples R China
[2] Chinese Acad Sci, Key Lab Wireless Sensor Network & Commun, Shanghai 200050, Peoples R China
Keywords
Satellites; Task analysis; Low earth orbit satellites; Delays; Servers; Internet of Things; Low latency communication; Edge computing; Deep reinforcement learning; Satellite edge computing (SEC); task offloading; service migration; deep reinforcement learning (DRL); TERRESTRIAL NETWORKS; MOBILITY-AWARE; ALLOCATION; PLACEMENT; ACCESS; 5G;
DOI
10.1109/ACCESS.2024.3367128
Chinese Library Classification
TP [Automation and Computer Technology]
Subject Classification Code
0812
Abstract
Satellite networks equipped with edge computing servers promise ubiquitous, low-latency computing services for Internet of Things (IoT) applications in the future satellite-terrestrial integrated network (STIN). Some emerging IoT services require real-time user-dependent state information, such as time-varying task states and user-specific configurations, to maintain service continuity. Service migration is therefore crucial for dynamic task offloading, as it synchronizes this user-dependent state between computing servers. However, offloading computing tasks at low latency under the impact of service migration remains challenging due to the high-speed movement and load imbalance of low Earth orbit (LEO) satellite networks. In this work, we investigate the task offloading problem with service migration for satellite edge computing (SEC) using inter-satellite cooperation. Facing dynamic service requirements with limited on-board bandwidth, energy, and storage resources, we formulate the problem to minimize the service delay by optimizing the offloading path selection. By leveraging a deep reinforcement learning (DRL) approach, we propose a distributed scheme based on the Dueling Double Deep Q-Network (D3QN) algorithm. Simulation results show that the proposed scheme effectively reduces the service delay and outperforms the benchmark algorithms.
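The D3QN algorithm named in the abstract combines two standard refinements of deep Q-learning: a dueling head that splits the Q-value into a state value and per-action advantages, and double Q-learning, where the online network selects the next action while a periodically synced target network evaluates it. The paper does not publish its network architecture or hyperparameters here, so the sketch below only illustrates these two core computations in plain Python; all function names and values are illustrative assumptions, not the authors' implementation.

```python
def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).

    Subtracting the mean advantage makes the V/A decomposition identifiable.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_q_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN bootstrap target for one transition.

    The online network picks the greedy next action; the target network
    scores it, which reduces the overestimation bias of vanilla DQN.
    """
    if done:
        return reward
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best_action]

# Illustrative use: 3 candidate offloading targets (hypothetical values).
q_values = dueling_q(value=1.0, advantages=[1.0, 2.0, 3.0])
target = double_q_target(reward=0.5, gamma=0.9,
                         q_online_next=[0.1, 0.9],
                         q_target_next=[2.0, 1.0],
                         done=False)
```

In the paper's setting, an "action" would correspond to selecting the next satellite on the offloading path, and the reward would reflect the (negative) per-hop contribution to the service delay, including any migration cost.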
Pages: 25844-25856
Page count: 13