SMART: Cost-Aware Service Migration Path Selection Based on Deep Reinforcement Learning

Cited by: 1
Authors
Cao, Buqing [1 ,2 ]
Ye, Hongfan [1 ,2 ,3 ]
Liu, Jianxun [1 ,2 ]
Tang, Bing [1 ,2 ]
Tao, Zhi [1 ,2 ]
Deng, Shuiguang [4 ]
Affiliations
[1] Hunan Univ Sci & Technol, Sch Comp Sci & Engn, Xiangtan 411201, Peoples R China
[2] Hunan Univ Sci & Technol, Hunan Key Lab Serv Comp & Novel Software Technol, Xiangtan 411201, Peoples R China
[3] Huaihua Univ, Sch Comp & Artificial Intelligence, Huaihua 418000, Peoples R China
[4] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
Keywords
Edge computing; mobile edge environments; service migration; path selection; deep Q-learning networks; EDGE; INTERNET; MEC; 5G;
DOI
10.1109/TITS.2024.3378920
CLC Number
TU [Architecture Science]
Discipline Code
0813
Abstract
With the large-scale commercial deployment of 5G technology, the era of Mobile Edge Computing, with the Internet of Everything at its core, is opening. Various computing resources are deployed at the edge of the network, near mobile smart terminals, forming a mobile edge environment for numerous application scenarios. In this environment, the mobile edge network must use a path selection method to obtain one or more service data transmission paths and seamlessly migrate the service data to the most appropriate edge server, so as to ensure the continuity of edge services and reduce the resource occupation of the mobile edge network. Therefore, this paper proposes a method of Cost-aware Service Migration Path Selection based on Deep Reinforcement Learning (SMART), aiming to jointly optimize communication cost and communication delay while meeting service requirements. The method transforms the service migration path selection problem in the mobile edge environment into a bi-objective optimization problem under dual constraints, i.e., finding low-latency, low-cost, high-quality service migration paths while satisfying the computing-resource and transmission-time constraints of mobile smart terminals. A Deep Q-learning Network (DQN) is then used to construct the corresponding Markov decision model for the problem scenario and to find the optimal path for edge service migration. The proposed method learns to select the optimal edge service migration path through interaction with the environment, without requiring a large amount of historical edge service migration path information in advance. Experimental results on the Shanghai (Beijing) Telecom mobile communication base station dataset and the Shanghai (Beijing) taxi trajectory dataset show that the proposed method can efficiently select low-latency, low-cost, high-quality edge service migration paths in a mobile edge environment while vehicles move continuously. It outperforms six typical edge service migration path selection methods, i.e., Q-learning, A-Star, PLP, PLP/F, PLP/P, and Dijkstra, by at least 15% on all evaluation metrics except computation time.
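The bi-objective, dual-constraint formulation described in the abstract can be sketched in symbols. This is a hedged reconstruction from the abstract alone: the path set P(s,t), per-link cost c_e, delay d_e, resource demand r_e, and budgets R and T are illustrative names, not the paper's own notation.

```latex
% Illustrative sketch of the dual-constraint, bi-objective problem;
% all symbols are assumptions inferred from the abstract.
\min_{p \in \mathcal{P}(s,t)} \left( \sum_{e \in p} c_e ,\; \sum_{e \in p} d_e \right)
\quad \text{s.t.} \quad
\sum_{e \in p} r_e \le R, \qquad \sum_{e \in p} d_e \le T
```

Here p ranges over candidate migration paths from the current edge server s to a candidate target t, the two objectives are the total communication cost and total communication delay of the path, and the constraints bound the computing-power resources and transmission time available to the mobile smart terminal.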
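The DQN component can likewise be sketched. The minimal PyTorch example below treats each edge server as a state and each next-hop choice as an action, with a reward that penalizes the weighted cost and delay of the hop; the toy topology, reward weights, and hyperparameters are invented for illustration and do not reproduce the paper's SMART agent.

```python
# Minimal DQN sketch for migration path selection on a toy edge topology.
# All numbers and the reward shaping are illustrative assumptions.
import random
import torch
import torch.nn as nn

N = 6                                 # number of edge servers (toy topology)
COST = torch.rand(N, N)               # per-hop communication cost (made up)
DELAY = torch.rand(N, N)              # per-hop communication delay (made up)
TARGET = N - 1                        # server the service should migrate to

# Q-network: one-hot server id -> Q-value of choosing each next hop
q_net = nn.Sequential(nn.Linear(N, 32), nn.ReLU(), nn.Linear(32, N))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.2                 # discount factor, exploration rate

def step(s: int, a: int):
    """Hop from server s to server a; reward is the negated weighted
    cost/delay of the hop, plus a bonus for reaching the target."""
    r = -(0.5 * COST[s, a] + 0.5 * DELAY[s, a]).item()
    done = a == TARGET
    return a, r + (1.0 if done else 0.0), done

for episode in range(500):
    s = 0                             # service starts at server 0
    for _ in range(2 * N):            # cap the migration path length
        x = torch.eye(N)[s]
        # epsilon-greedy selection of the next-hop server
        a = random.randrange(N) if random.random() < eps else int(q_net(x).argmax())
        s_next, r, done = step(s, a)
        # one-step temporal-difference target: r + gamma * max_a' Q(s', a')
        with torch.no_grad():
            best_next = 0.0 if done else gamma * q_net(torch.eye(N)[s_next]).max().item()
        loss = (q_net(x)[a] - (r + best_next)) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
        s = s_next
        if done:
            break
```

A production DQN would add an experience-replay buffer and a periodically synchronized target network; the sketch keeps only the point the abstract stresses: the agent learns from interaction with the environment rather than from a pre-collected corpus of historical migration paths.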
Pages: 12421-12436
Page count: 16