Computation Offloading via Multi-Agent Deep Reinforcement Learning in Aerial Hierarchical Edge Computing Systems

Cited by: 1
Authors
Wang, Yuanyuan [1 ]
Zhang, Chi [1 ]
Ge, Taiheng [2 ]
Pan, Miao [3 ]
Affiliations
[1] Univ Sci & Technol China, Sch Cyber Sci & Technol, Hefei 230027, Peoples R China
[2] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
[3] Univ Houston, Dept Elect & Comp Engn, Houston, TX 77204 USA
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
Task analysis; Internet of Things; Autonomous aerial vehicles; Delays; Costs; Resource management; Disasters; Aerial computing; mobile edge computing; deep reinforcement learning; computation offloading; RESOURCE-ALLOCATION; NETWORKS; ARCHITECTURE; VISION; TASK; MEC;
DOI
10.1109/TNSE.2024.3391289
Chinese Library Classification (CLC)
T [Industrial Technology];
Subject classification code
08;
Abstract
The exponential growth of Internet of Things (IoT) devices and the emergence of new applications have significantly increased the demand for ubiquitous connectivity and efficient computing paradigms. Traditional terrestrial edge computing architectures cannot provide massive IoT connectivity worldwide. In this article, we propose an aerial hierarchical mobile edge computing system composed of high-altitude platforms (HAPs) and unmanned aerial vehicles (UAVs). In particular, we consider non-divisible tasks and formulate a task offloading problem that minimizes the long-term task processing cost while satisfying the queueing constraints of both the task offloading and task processing procedures. We propose a multi-agent deep reinforcement learning (DRL) based computation offloading algorithm in which each device makes its offloading decision according to its local observations. Because the computing resources of UAVs are limited, heavy task loads at the UAVs increase the ratio of offloaded tasks that are abandoned. To increase the task completion ratio, a convolutional LSTM (ConvLSTM) network is utilized to estimate the future task loads of the UAVs. In addition, a prioritized experience replay (PER) method is proposed to accelerate convergence and improve training stability. Experimental results demonstrate that the proposed computation offloading algorithm outperforms other benchmark methods.
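The abstract credits a prioritized experience replay (PER) method with faster convergence and more stable training. As an illustration of the general PER technique only (not the paper's specific variant), the following is a minimal sketch of a proportional PER buffer in Python with NumPy; the class name, the hyperparameters alpha, beta, and eps, and the transition format are assumptions made for this sketch.

```python
# Minimal sketch of proportional prioritized experience replay (PER).
# Transition format, class name, and hyperparameters are illustrative
# assumptions; they are not taken from the paper.
import numpy as np

class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity                      # max stored transitions
        self.alpha = alpha                            # how strongly priority shapes sampling
        self.beta = beta                              # importance-sampling correction strength
        self.eps = eps                                # keeps every priority strictly positive
        self.storage = []                             # transition tuples
        self.priorities = np.zeros(capacity)          # one priority per slot
        self.pos = 0                                  # next write position (ring buffer)

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.storage else 1.0
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        n = len(self.storage)
        probs = self.priorities[:n] ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(n, batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (n * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.storage[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Priority is |TD error| + eps; alpha is applied at sampling time.
        self.priorities[idx] = np.abs(td_errors) + self.eps
```

In a training loop, one would draw a batch with sample(), compute the TD errors for that batch, and then call update_priorities(idx, td_errors) so that transitions with larger errors are replayed more often, which is what gives PER its speed and stability benefits.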
Pages: 5253 - 5266
Number of pages: 14
Related papers
50 records in total
  • [11] Multi-Agent Deep Reinforcement Learning based Collaborative Computation Offloading in Vehicular Edge Networks
    Wang, Hao
    Zhou, Huan
    Zhao, Liang
    Liu, Xuxun
    Leung, Victor C. M.
    2023 IEEE 43RD INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS WORKSHOPS, ICDCSW, 2023, : 151 - 156
  • [12] Hierarchical Task Offloading for Vehicular Fog Computing Based on Multi-Agent Deep Reinforcement Learning
    Hou, Yukai
    Wei, Zhiwei
    Zhang, Rongqing
    Cheng, Xiang
    Yang, Liuqing
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (04) : 3074 - 3085
  • [13] Cooperative Task Offloading for Mobile Edge Computing Based on Multi-Agent Deep Reinforcement Learning
    Yang, Jian
    Yuan, Qifeng
    Chen, Shuangwu
    He, Huasen
    Jiang, Xiaofeng
    Tan, Xiaobin
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2023, 20 (03) : 3205 - 3219
  • [14] Multi-agent deep reinforcement learning for collaborative task offloading in mobile edge computing networks
    Chen, Minxuan
    Guo, Aihuang
    Song, Chunlin
    DIGITAL SIGNAL PROCESSING, 2023, 140
  • [15] Vehicle Edge Computing Task Offloading Strategy Based on Multi-Agent Deep Reinforcement Learning
    Bo, Jianxiong
    Zhao, Xu
    JOURNAL OF GRID COMPUTING, 2025, 23 (02)
  • [16] Decentralized computation offloading via multi-agent deep reinforcement learning for NOMA-assisted mobile edge computing with energy harvesting devices
    Daghayeghi, Atousa
    Nickray, Mohsen
    JOURNAL OF SYSTEMS ARCHITECTURE, 2024, 151
  • [17] Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning
    Chen, Xianfu
    Zhang, Honggang
    Wu, Celimuge
    Mao, Shiwen
    Ji, Yusheng
    Bennis, Mehdi
    IEEE INTERNET OF THINGS JOURNAL, 2019, 6 (03) : 4005 - 4018
  • [18] Cooperative Multi-Agent Deep Reinforcement Learning for Computation Offloading in Digital Twin Satellite Edge Networks
    Ji, Zhe
    Wu, Sheng
    Jiang, Chunxiao
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2023, 41 (11) : 3414 - 3429
  • [20] NOMA-Based Multi-User Mobile Edge Computation Offloading via Cooperative Multi-Agent Deep Reinforcement Learning
    Chen, Zhao
    Zhang, Lei
    Pei, Yukui
    Jiang, Chunxiao
    Yin, Liuguo
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2022, 8 (01) : 350 - 364