Service caching with multi-agent reinforcement learning in cloud-edge collaboration computing

Times Cited: 0
Authors
Li, Yinglong [1 ]
Zhang, Zhengjiang [1 ]
Chao, Han-Chieh [2 ,3 ,4 ,5 ,6 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing 100044, Peoples R China
[2] Tamkang Univ, Taipei 251, Taiwan
[3] Fo Guang Univ, Dept Appl Informat, Yilan, Taiwan
[4] Tamkang Univ, Dept Artificial Intelligence, New Taipei, Taiwan
[5] Natl Dong Hwa Univ, Hualien, Taiwan
[6] UCSI, Kuala Lumpur, Malaysia
Funding
National Natural Science Foundation of China;
Keywords
Edge computing; Service caching; Resource allocation; Multi-agent reinforcement learning; PLACEMENT;
DOI
10.1007/s12083-025-01915-y
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Edge computing moves application services from the central cloud to the network edge, significantly reducing service latency. Edge service caching is more challenging than cloud caching because mobile user requests are dynamic and diverse, so traditional caching strategies are not directly applicable to edge environments; the challenge intensifies further when collaborative caching between adjacent servers is considered. To address these challenges, we propose an edge service caching solution aimed at minimizing the total service delay to ensure a high-quality user experience. First, given the limited prior information on user requests in the current time period, we adopt a Transformer-based approach to improve the accuracy of user request prediction. Second, since the service caching problem involves both continuous and discrete action spaces, we propose a deep reinforcement learning algorithm based on a hybrid soft actor-critic (SAC) to learn the optimal caching strategy. We then leverage a centralized-training, decentralized-execution framework to address the multi-agent problem, while selectively pruning agent observation connections to avoid interference from redundant observations. Finally, extensive simulations demonstrate that our proposed collaborative cloud-edge service caching strategy reduces service latency more effectively than existing approaches.
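The hybrid action space described in the abstract — a discrete cache/evict decision per service plus a continuous resource-allocation decision — can be illustrated with a toy two-headed policy. This is a minimal sketch under assumed shapes and random placeholder weights, not the paper's trained SAC actor: a real hybrid SAC agent would learn these heads with separate entropy terms for the discrete and continuous components.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_policy(state, n_services=4, hidden=16):
    """Toy hybrid actor with one shared trunk and two heads.

    Discrete head  -> per-service cache decision (cache or not).
    Continuous head -> fraction of edge-server compute allocated
                       to each cached service.
    Weights are random placeholders for illustration only.
    """
    # Shared trunk: a single tanh layer over the observed state
    # (e.g., predicted request rates plus current server load).
    W1 = rng.standard_normal((state.size, hidden)) * 0.1
    h = np.tanh(state @ W1)

    # Discrete head: independent Bernoulli probabilities per service.
    W_d = rng.standard_normal((hidden, n_services)) * 0.1
    cache_probs = 1.0 / (1.0 + np.exp(-(h @ W_d)))
    cache = (cache_probs > 0.5).astype(int)

    # Continuous head: softmax over services gives resource shares
    # summing to at most 1 once uncached services are zeroed out.
    W_c = rng.standard_normal((hidden, n_services)) * 0.1
    logits = h @ W_c
    alloc = np.exp(logits) / np.exp(logits).sum()
    alloc = alloc * cache  # no compute for services not cached

    return cache, alloc

state = rng.standard_normal(8)  # hypothetical 8-dim edge-server observation
cache, alloc = hybrid_policy(state)
```

In a centralized-training, decentralized-execution setup, each edge server would run such a policy on its local observation at decision time, while a centralized critic sees all agents' observations during training.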
Pages: 13