Multi-agent deep reinforcement learning for computation offloading in cooperative edge network

Cited: 4
Authors
Wu, Pengju [1 ]
Guan, Yepeng [1 ,2 ,3 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[2] Minist Educ, Key Lab Adv Display & Syst Applicat, Shanghai 200072, Peoples R China
[3] Shanghai Univ, Minist Educ, Key Lab Silicate Cultural Rel Conservat, Shanghai 200444, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Mobile edge computing; Multi-agent deep reinforcement learning; Computation offloading; RESOURCE-ALLOCATION; MOBILE; OPTIMIZATION; CLOUD;
DOI
10.1007/s10844-024-00907-3
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Mobile Edge Computing (MEC) has emerged as an effective paradigm for reducing latency and enhancing computational efficiency. However, the rapid proliferation of edge servers and user devices has significantly increased the complexity of task processing and resource management. Traditional task offloading approaches often rely on centralized decision-making, resulting in high computational complexity and time costs. To address these challenges, this paper introduces a dynamic collaborative framework involving multiple users and edge servers. We formulate the problem of resource allocation and task offloading as a multi-objective Markov Decision Process (MDP) with a mixed action space. To solve this, we propose a novel algorithm called Multi-Agent Mobile Edge Computing (MA-MEC), which leverages multi-agent reinforcement learning. In MA-MEC, each mobile edge server (MES) operates as an independent learning agent. Through centralized training and decentralized execution, these agents collaborate to develop efficient task offloading strategies in complex and dynamic edge environments. Simulation results demonstrate the effectiveness of our approach. 
MES agents learn to execute tasks more efficiently, increasing the number of processed tasks by 12.5%, while task offloading rates rise by 17% and time costs are reduced by 53% compared with baseline methods. The proposed method shows significant advantages, especially in resource-constrained scenarios.
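The abstract describes MA-MEC only at a high level: each MES is an independent agent with a mixed (discrete plus continuous) action, trained with a centralized critic and executed in a decentralized way. As a minimal sketch of that centralized-training / decentralized-execution (CTDE) pattern — with toy stand-ins for the actor, critic, and update rule, since none of those details are given in the abstract — the structure could look like this; all class and function names here are hypothetical, not from the paper:

```python
class MESAgent:
    """One mobile edge server agent with a mixed action:
    a discrete offload target plus a continuous resource fraction."""

    def __init__(self, agent_id, n_targets):
        self.agent_id = agent_id
        self.n_targets = n_targets
        # Toy per-target preference scores standing in for an actor network.
        self.prefs = [0.0] * n_targets

    def act(self, local_obs):
        # Decentralized execution: the decision uses only this agent's
        # local observation, never the other agents' states.
        target = max(range(self.n_targets), key=lambda t: self.prefs[t])
        fraction = min(1.0, max(0.0, local_obs))  # continuous part of the action
        return target, fraction


class CentralizedCritic:
    """During training, the critic sees every agent's observation and action."""

    def evaluate(self, joint_obs, joint_actions):
        # Toy value: reward spreading load across distinct targets
        # while offloading larger fractions overall.
        distinct_targets = len({t for t, _ in joint_actions})
        total_fraction = sum(f for _, f in joint_actions)
        return distinct_targets + total_fraction


def train_step(agents, critic, joint_obs, lr=0.1):
    """One CTDE training step: act locally, evaluate jointly, update each actor."""
    joint_actions = [a.act(obs) for a, obs in zip(agents, joint_obs)]
    value = critic.evaluate(joint_obs, joint_actions)
    # Toy update: nudge each agent's chosen-target preference by the joint value.
    for agent, (target, _) in zip(agents, joint_actions):
        agent.prefs[target] += lr * value
    return value
```

The key design point this skeleton illustrates is the information asymmetry: `CentralizedCritic.evaluate` receives the joint observations and actions (available only at training time), while `MESAgent.act` depends solely on local state, so trained agents can be deployed on separate edge servers without inter-server coordination at decision time.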
Pages: 567-591
Number of pages: 25
References (45 in total)
[31] Pu, Lingjun; Chen, Xu; Xu, Jingdong; Fu, Xiaoming. D2D Fogging: An Energy-Efficient and Incentive-Aware Task Offloading Framework via Network-assisted D2D Collaboration [J]. IEEE Journal on Selected Areas in Communications, 2016, 34(12): 3887-3901.
[32] Qu, Bin; Bai, Yan; Chu, Yul; Wang, Li-E; Yu, Feng; Li, Xianxian. Resource allocation for MEC system with multi-users resource competition based on deep reinforcement learning approach [J]. Computer Networks, 2022, 215.
[33] Ren, Jinke; Yu, Guanding; He, Yinghui; Li, Geoffrey Ye. Collaborative Cloud and Edge Computing for Latency Minimization [J]. IEEE Transactions on Vehicular Technology, 2019, 68(05): 5031-5044.
[34] Scott, J. CRAWDAD CAMBRIDGEHAG [dataset], 2022. DOI: 10.15783/C70011.
[35] Shannon, C. E. A Mathematical Theory of Communication [J]. Bell System Technical Journal, 1948, 27: 379. DOI: 10.1002/j.1538-7305.1948.tb01338.x.
[36] Song, Fuhong; Yang, Qixun; Deng, Mingsen; Xing, Huanlai; Liu, Yanping; Yu, Xi; Li, Kaiju; Xu, Lexi. AoI and Energy Tradeoff for Aerial-Ground Collaborative MEC: A Multi-Objective Learning Approach [J]. IEEE Transactions on Mobile Computing, 2024, 23(12): 11278-11294.
[37] Song, Fuhong; Deng, Mingsen; Xing, Huanlai; Liu, Yanping; Ye, Fei; Xiao, Zhiwen. Energy-Efficient Trajectory Optimization With Wireless Charging in UAV-Assisted MEC Based on Multi-Objective Reinforcement Learning [J]. IEEE Transactions on Mobile Computing, 2024, 23(12): 10867-10884.
[38] Suzuki, Akito; Kobayashi, Masahiro; Oki, Eiji. Multi-Agent Deep Reinforcement Learning for Cooperative Computing Offloading and Route Optimization in Multi Cloud-Edge Networks [J]. IEEE Transactions on Network and Service Management, 2023, 20(04): 4416-4434.
[39] Tilahun, Fitsum Debebe; Abebe, Ameha Tsegaye; Kang, Chung G. Multi-Agent Reinforcement Learning for Distributed Resource Allocation in Cell-Free Massive MIMO-Enabled Mobile Edge Computing Network [J]. IEEE Transactions on Vehicular Technology, 2023, 72(12): 16454-16468.
[40] Tran, Tuyen X.; Pompili, Dario. Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks [J]. IEEE Transactions on Vehicular Technology, 2019, 68(01): 856-868.