Computation Offloading via Multi-Agent Deep Reinforcement Learning in Aerial Hierarchical Edge Computing Systems

Cited by: 1
Authors
Wang, Yuanyuan [1]
Zhang, Chi [1]
Ge, Taiheng [2]
Pan, Miao [3]
Affiliations
[1] Univ Sci & Technol China, Sch Cyber Sci & Technol, Hefei 230027, Peoples R China
[2] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
[3] Univ Houston, Dept Elect & Comp Engn, Houston, TX 77204 USA
Source
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING | 2024, Vol. 11, No. 6
Funding
National Natural Science Foundation of China; U.S. National Science Foundation;
Keywords
Task analysis; Internet of Things; Autonomous aerial vehicles; Delays; Costs; Resource management; Disasters; Aerial computing; mobile edge computing; deep reinforcement learning; computation offloading; RESOURCE-ALLOCATION; NETWORKS; ARCHITECTURE; VISION; TASK; MEC;
DOI
10.1109/TNSE.2024.3391289
Chinese Library Classification (CLC) number
T [Industrial Technology];
Discipline classification code
08;
Abstract
The exponential growth of Internet of Things (IoT) devices and emerging applications has significantly increased the demand for ubiquitous connectivity and efficient computing paradigms. Traditional terrestrial edge computing architectures cannot provide massive IoT connectivity worldwide. In this article, we propose an aerial hierarchical mobile edge computing system composed of high-altitude platforms (HAPs) and unmanned aerial vehicles (UAVs). In particular, we consider non-divisible tasks and formulate a task offloading problem that minimizes the long-term task processing cost while satisfying the queueing constraints of both the offloading and processing procedures. We propose a multi-agent deep reinforcement learning (DRL) based computation offloading algorithm in which each device makes its offloading decision based on local observations. Because the computing resources of UAVs are limited, heavy UAV task loads increase the ratio of abandoned offloaded tasks. To increase the task completion ratio, a convolutional LSTM (ConvLSTM) network is used to estimate the future task loads of UAVs. In addition, a prioritized experience replay (PER) method is proposed to accelerate convergence and improve training stability. Experimental results demonstrate that the proposed computation offloading algorithm outperforms benchmark methods.
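The abstract combines three ingredients: per-device offloading agents trained with multi-agent DRL, a ConvLSTM estimator of future UAV task loads, and prioritized experience replay (PER) for faster and more stable training. The sketch below illustrates only the PER ingredient in isolation, as a proportional replay buffer driving a single device-side Q-network that picks an offloading target from a local observation; it is a minimal illustration under assumed dimensions and names (QNet, PERBuffer, train_step, OBS_DIM, N_ACTIONS), not the paper's implementation.

# A minimal, hedged sketch: proportional prioritized experience replay (PER)
# around a single device-side DQN choosing an offloading target.
# All dimensions, network sizes, names, and the reward shape are illustrative
# assumptions, not taken from the paper.
import random
import numpy as np
import torch
import torch.nn as nn

OBS_DIM = 8      # assumed local observation: own queue length, task size, estimated UAV loads, ...
N_ACTIONS = 5    # assumed offloading targets: {local, HAP, UAV_1, UAV_2, UAV_3}


class QNet(nn.Module):
    """Small MLP mapping a local observation to one Q-value per offloading target."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


class PERBuffer:
    """Proportional PER: sample transitions with probability ~ |TD error|^alpha."""
    def __init__(self, capacity=10000, alpha=0.6, eps=1e-3):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.prios, self.pos = [], [], 0

    def push(self, transition):
        max_p = max(self.prios, default=1.0)   # new transitions get the current max priority
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.prios.append(max_p)
        else:
            self.data[self.pos] = transition
            self.prios[self.pos] = max_p
            self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = np.array(self.prios) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        w = (len(self.data) * p[idx]) ** (-beta)   # importance-sampling weights
        w /= w.max()
        return [self.data[i] for i in idx], idx, torch.tensor(w, dtype=torch.float32)

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prios[i] = abs(float(e)) + self.eps


def train_step(qnet, target_net, buf, optimizer, gamma=0.99, batch_size=32):
    """One DQN update with PER importance weights and a priority refresh."""
    if len(buf.data) < batch_size:
        return
    batch, idx, w = buf.sample(batch_size)
    obs, act, rew, nxt = map(np.array, zip(*batch))
    obs = torch.tensor(obs, dtype=torch.float32)
    nxt = torch.tensor(nxt, dtype=torch.float32)
    act = torch.tensor(act, dtype=torch.int64)
    rew = torch.tensor(rew, dtype=torch.float32)
    q = qnet(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * target_net(nxt).max(1).values
    td = target - q
    loss = (w * td ** 2).mean()        # weighted MSE corrects the sampling bias of PER
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    buf.update(idx, td.detach())       # refresh priorities with the new TD errors


if __name__ == "__main__":
    qnet, target_net = QNet(), QNet()
    target_net.load_state_dict(qnet.state_dict())
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    buf = PERBuffer()
    obs = np.random.rand(OBS_DIM).astype(np.float32)
    for step in range(200):            # random transitions stand in for the aerial MEC simulator
        a = random.randrange(N_ACTIONS) if random.random() < 0.1 else \
            int(qnet(torch.tensor(obs)).argmax())
        nxt = np.random.rand(OBS_DIM).astype(np.float32)
        rew = -float(np.random.rand())   # negative processing cost (e.g., delay) as reward
        buf.push((obs, a, rew, nxt))
        train_step(qnet, target_net, buf, opt)
        if step % 50 == 0:
            target_net.load_state_dict(qnet.state_dict())   # periodic hard target update
        obs = nxt

In the full system described by the abstract, each IoT device would run one such agent, and the local observation would additionally carry the ConvLSTM's predicted UAV task loads; here a random transition generator stands in for the aerial MEC environment.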
Pages: 5253 - 5266
Number of pages: 14
Related Papers
50 records in total
  • [11] Decentralized computation offloading via multi-agent deep reinforcement learning for NOMA-assisted mobile edge computing with energy harvesting devices
    Daghayeghi, Atousa
    Nickray, Mohsen
    JOURNAL OF SYSTEMS ARCHITECTURE, 2024, 151
  • [12] Multi-Agent Deep Reinforcement Learning-Based Computation Offloading in LEO Satellite Edge Computing System
    Wu, Jian
    Jia, Min
    Zhang, Ningtao
    Guo, Qing
    IEEE COMMUNICATIONS LETTERS, 2024, 28 (10) : 2352 - 2356
  • [13] Collaborative Task Offloading Optimization for Satellite Mobile Edge Computing Using Multi-Agent Deep Reinforcement Learning
    Zhang, Hangyu
    Zhao, Hongbo
    Liu, Rongke
    Kaushik, Aryan
    Gao, Xiangqiang
    Xu, Shenzhan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (10) : 15483 - 15498
  • [14] Joint Computation Offloading and Resource Allocation in Multi-Edge Smart Communities With Personalized Federated Deep Reinforcement Learning
    Chen, Zheyi
    Xiong, Bing
    Chen, Xing
    Min, Geyong
    Li, Jie
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 11604 - 11619
  • [15] NOMA-Based Multi-User Mobile Edge Computation Offloading via Cooperative Multi-Agent Deep Reinforcement Learning
    Chen, Zhao
    Zhang, Lei
    Pei, Yukui
    Jiang, Chunxiao
    Yin, Liuguo
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2022, 8 (01) : 350 - 364
  • [16] Multi-agent deep reinforcement learning for collaborative task offloading in mobile edge computing networks
    Chen, Minxuan
    Guo, Aihuang
    Song, Chunlin
    DIGITAL SIGNAL PROCESSING, 2023, 140
  • [17] Cooperative Task Offloading for Mobile Edge Computing Based on Multi-Agent Deep Reinforcement Learning
    Yang, Jian
    Yuan, Qifeng
    Chen, Shuangwu
    He, Huasen
    Jiang, Xiaofeng
    Tan, Xiaobin
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2023, 20 (03) : 3205 - 3219
  • [18] Joint Service Caching and Computation Offloading Scheme Based on Deep Reinforcement Learning in Vehicular Edge Computing Systems
    Xue, Zheng
    Liu, Chang
    Liao, Canliang
    Han, Guojun
    Sheng, Zhengguo
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (05) : 6709 - 6722
  • [19] Computing Over the Sky: Joint UAV Trajectory and Task Offloading Scheme Based on Optimization-Embedding Multi-Agent Deep Reinforcement Learning
    Li, Xuanheng
    Du, Xinyang
    Zhao, Nan
    Wang, Xianbin
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2024, 72 (03) : 1355 - 1369
  • [20] Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning
    Chen, Xianfu
    Zhang, Honggang
    Wu, Celimuge
    Mao, Shiwen
    Ji, Yusheng
    Bennis, Mehdi
IEEE INTERNET OF THINGS JOURNAL, 2019, 6 (03) : 4005 - 4018