LMM: latency-aware micro-service mashup in mobile edge computing environment

Cited by: 54
Authors
Zhou, Ao [1 ]
Wang, Shangguang [1 ]
Wan, Shaohua [2 ]
Qi, Lianyong [3 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing, Peoples R China
[2] Zhongnan Univ Econ & Law, Sch Informat & Safety Engn, Wuhan, Hubei, Peoples R China
[3] Qufu Normal Univ, Sch Informat Sci & Engn, Rizhao, Peoples R China
Keywords
Micro-service; Mobile edge computing; Network resource consumption; Latency; Mashup; Resource allocation; Smart city; Cloud
DOI
10.1007/s00521-019-04693-w
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Internet of Things (IoT) applications impose a set of stringent requirements (e.g., low latency, high bandwidth) on the network and computing paradigm. 5G networks face great challenges in supporting IoT services, and the centralized cloud computing paradigm becomes inefficient under these stringent requirements; extending spectrum resources alone cannot solve the problem effectively. Mobile edge computing offers an IT service environment at the Radio Access Network edge and presents great opportunities for the development of IoT applications. With its capability to reduce latency and improve user experience, mobile edge computing has become a key technology toward 5G. To enable extensive sharing, complex IoT applications are implemented as sets of lightweight micro-services distributed among containers over the mobile edge network. How to produce the optimal composition of suitable micro-services for an application in a mobile edge computing environment is therefore an important issue. To address it, we propose a latency-aware micro-service mashup approach in this paper. First, the problem is formulated as an integer nonlinear program. Then, we prove the NP-hardness of the problem via a reduction from the delay-constrained least-cost (DCLC) problem. Finally, we propose an approximate latency-aware micro-service mashup approach to solve the problem. Experimental results show that the proposed approach achieves a substantial reduction in network resource consumption while satisfying the latency constraint.
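Note: the abstract describes the mashup problem only at a high level. Below is a minimal, hypothetical Python sketch of the underlying selection problem it refers to — choosing one instance per micro-service in a chain so that network resource cost is minimized under an end-to-end latency bound. Every service name, edge node, latency, and cost value here is invented for illustration, and the brute-force enumeration shown is not the paper's LMM approximation algorithm.

# Hypothetical illustration of a latency-constrained micro-service mashup.
# All data are invented; this brute-force search is NOT the paper's LMM
# algorithm, it only shows the shape of the inputs and the objective.
from itertools import product

# An application is an ordered chain of micro-service types.
CHAIN = ["auth", "analytics", "storage"]

# Candidate instances per micro-service type: (edge_node, processing_latency_ms)
CANDIDATES = {
    "auth":      [("edge-A", 4), ("edge-B", 6)],
    "analytics": [("edge-A", 20), ("edge-C", 12)],
    "storage":   [("edge-B", 8), ("edge-C", 10)],
}

# Symmetric per-link network latency (ms) and resource cost (arbitrary units);
# traffic between services placed on the same node is assumed free and instant.
NET_LATENCY = {("edge-A", "edge-B"): 5, ("edge-A", "edge-C"): 9, ("edge-B", "edge-C"): 3}
NET_COST = {("edge-A", "edge-B"): 2, ("edge-A", "edge-C"): 4, ("edge-B", "edge-C"): 1}

def link(table, u, v):
    """Look up a symmetric per-link value; zero when both services share a node."""
    if u == v:
        return 0
    return table.get((u, v), table.get((v, u)))

def best_mashup(deadline_ms):
    """Enumerate all placements; return the cheapest one meeting the deadline."""
    best = None
    for placement in product(*(CANDIDATES[s] for s in CHAIN)):
        latency = sum(proc for _, proc in placement)
        cost = 0
        for (n1, _), (n2, _) in zip(placement, placement[1:]):
            latency += link(NET_LATENCY, n1, n2)
            cost += link(NET_COST, n1, n2)
        if latency <= deadline_ms and (best is None or cost < best[0]):
            best = (cost, latency, [node for node, _ in placement])
    return best

if __name__ == "__main__":
    print(best_mashup(deadline_ms=45))  # -> (cost, end-to-end latency, chosen nodes)

Because this enumeration grows exponentially with the chain length (consistent with the NP-hardness claim in the abstract), the paper replaces it with an approximation algorithm; the sketch is meant only to make the inputs and the latency-versus-cost trade-off concrete.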
Pages: 15411-15425
Number of pages: 15