UAV-enabled computation migration for complex missions: A reinforcement learning approach

Cited by: 25
Authors
Zhu, Shichao [1 ,2 ]
Gui, Lin [1 ]
Cheng, Nan [3 ]
Zhang, Qi [1 ]
Sun, Fei [1 ]
Lang, Xiupu [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Elect Engn, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Sci & Technol Commun Network Lab, Shanghai 200240, Peoples R China
[3] Xidian Univ, Sch Telecommun, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
decision making; learning (artificial intelligence); autonomous aerial vehicles; remotely operated vehicles; Markov processes; UAV-enabled computation migration; complex missions; reinforcement learning approach; computation offloading; remote areas; traditional edge infrastructures; unmanned aerial vehicle-enabled edge; near-users edge computing service; computation migration problem; typical task-flows; proper UAV; UAV-ground communication data rate; UAV location; missions response time; computation migration decision making problem; advantage actor-critic reinforcement; average response time; ENERGY; EDGE; OPTIMIZATION; NETWORKS; DESIGN;
DOI
10.1049/iet-com.2019.1188
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808 ; 0809 ;
Abstract
The implementation of computation offloading is challenging in remote areas where traditional edge infrastructures are sparsely deployed. In this study, the authors propose an unmanned aerial vehicle (UAV)-enabled edge computing framework in which a group of UAVs flies around to provide near-user edge computing services. They study the computation migration problem for complex missions, which can be decomposed into typical task-flows that capture the inter-dependency of tasks. Each time a task appears, it must be allocated to a proper UAV for execution; this allocation is defined as computation migration or task migration. Since the UAV-ground communication data rate is strongly tied to the UAV's location, selecting a proper UAV to execute each task largely benefits the missions' response time. They formulate the computation migration decision-making problem as a Markov decision process in which the state contains observations extracted from the environment. To cope with the dynamics of the environment, they propose an advantage actor-critic reinforcement learning approach that learns a near-optimal policy on the fly. Simulation results show that the proposed approach has a desirable convergence property and significantly reduces the average response time of missions compared with a benchmark greedy method.
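The advantage actor-critic idea described in the abstract can be illustrated with a minimal, self-contained sketch. The toy environment below (a fixed sequence of tasks, each assigned to one of a few UAVs with an assumed random latency table) is hypothetical and stands in for the paper's UAV simulator; it only demonstrates the actor-critic mechanics, where the critic's temporal-difference error serves as the advantage estimate that weights the policy-gradient update.

```python
import numpy as np

# Hypothetical toy setting (not the paper's simulator): each of N_TASKS
# sequential tasks is assigned to one of N_UAVS; the per-step cost is an
# assumed fixed "response time" latency[task, uav].
rng = np.random.default_rng(0)
N_UAVS, N_TASKS = 3, 5
latency = rng.uniform(1.0, 5.0, size=(N_TASKS, N_UAVS))

theta = np.zeros((N_TASKS, N_UAVS))  # actor: per-state action preferences
V = np.zeros(N_TASKS + 1)            # critic: state-value estimates (V[end] = 0)
alpha_pi, alpha_v, gamma = 0.1, 0.2, 1.0

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for episode in range(2000):
    for s in range(N_TASKS):                 # one task allocation per step
        pi = softmax(theta[s])
        a = rng.choice(N_UAVS, p=pi)         # sample a UAV from the policy
        r = -latency[s, a]                   # reward = negative response time
        td = r + gamma * V[s + 1] - V[s]     # TD error = advantage estimate
        V[s] += alpha_v * td                 # critic update
        grad = -pi                           # gradient of log pi(a|s) w.r.t. theta[s]
        grad[a] += 1.0
        theta[s] += alpha_pi * td * grad     # advantage-weighted actor update

learned = theta.argmax(axis=1)               # greedy UAV choice per task
print("greedy UAV per task:", learned)
```

With a one-step cost that depends only on the current (task, UAV) pair, the learned greedy policy should converge toward picking the lowest-latency UAV for each task; in the paper's setting the state additionally encodes environment observations such as UAV locations.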
Pages: 2472 - 2480
Number of pages: 9
Related Papers
50 records in total
  • [1] Energy-efficient UAV-enabled computation offloading for industrial internet of things: a deep reinforcement learning approach
    Shi, Shuo
    Wang, Meng
    Gu, Shushi
    Zheng, Zhong
    WIRELESS NETWORKS, 2024, 30 (05) : 3921 - 3934
  • [2] Trajectory optimization for UAV-enabled relaying with reinforcement learning
    Zhang, Chiya
    Li, Xinjie
    He, Chunlong
    Li, Xingquan
    Lin, Dongping
    Digital Communications and Networks, 2025, 11 (01) : 200 - 209
  • [3] Trajectory Design for UAV-Enabled Maritime Secure Communications: A Reinforcement Learning Approach
    Liu, Jintao
    Zeng, Feng
    Wang, Wei
    Sheng, Zhichao
    Wei, Xinchen
    Cumanan, Kanapathippillai
    CHINA COMMUNICATIONS, 2022, 19 (09) : 26 - 36
  • [4] Computation Offloading in UAV-Enabled Edge Computing: A Stackelberg Game Approach
    Yuan, Xinwang
    Xie, Zhidong
    Tan, Xin
    SENSORS, 2022, 22 (10)
  • [5] UAV-Enabled Energy-Efficient Aerial Computing: A Federated Deep Reinforcement Learning Approach
    Wu, Qianqian
    Liu, Qiang
    He, Ying
    Wu, Zefan
    IEEE TRANSACTIONS ON RELIABILITY, 2024,
  • [6] Hybrid UAV-Enabled Secure Offloading via Deep Reinforcement Learning
    Yoo, Seonghoon
    Jeong, Seongah
    Kang, Joonhyuk
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2023, 12 (06) : 972 - 976
  • [7] Cache Sharing in UAV-Enabled Cellular Network: A Deep Reinforcement Learning-Based Approach
    Muslih, Hamidullah
    Kazmi, S. M. Ahsan
    Mazzara, Manuel
    Baye, Gaspard
    IEEE ACCESS, 2024, 12 : 43422 - 43435
  • [8] UAV-Enabled Mobile Radiation Source Tracking with Deep Reinforcement Learning
    Gu, Jiangchun
    Wang, Haichao
    Ding, Guoru
    Xu, Yitao
    Jiao, Yutao
    2020 12TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2020, : 672 - 678
  • [9] UAV-Enabled Asynchronous Federated Learning
    Zhai, Zhiyuan
    Yuan, Xiaojun
    Wang, Xin
    Yang, Huiyuan
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2025, 24 (03) : 2358 - 2372