LMM: latency-aware micro-service mashup in mobile edge computing environment

Cited by: 0
Authors
Ao Zhou
Shangguang Wang
Shaohua Wan
Lianyong Qi
Affiliations
[1] Beijing University of Posts and Telecommunications, State Key Laboratory of Networking and Switching Technology
[2] Zhongnan University of Economics and Law, School of Information and Safety Engineering
[3] Qufu Normal University, School of Information Science and Engineering
Source
Neural Computing and Applications | 2020, Vol. 32
Keywords
Micro-service; Mobile edge computing; Network resource consumption; Latency; Mashup;
DOI: Not available
Abstract
Internet of Things (IoT) applications impose stringent requirements (e.g., low latency and high bandwidth) on the underlying network and computing paradigms. 5G networks face great challenges in supporting IoT services, and the centralized cloud computing paradigm is inefficient at meeting these requirements. Extending spectrum resources alone cannot solve the problem effectively. Mobile edge computing offers an IT service environment at the Radio Access Network edge and presents great opportunities for the development of IoT applications. With its capability to reduce latency and improve the user experience, mobile edge computing has become a key technology toward 5G. To enable extensive sharing, complex IoT applications are implemented as sets of lightweight micro-services distributed among containers across the mobile edge network. How to compose the optimal set of suitable micro-services for an application in the mobile edge computing environment is therefore an important issue. To address it, we propose a latency-aware micro-service mashup approach in this paper. First, the problem is formulated as an integer nonlinear program. Then, we prove the NP-hardness of the problem by reducing it to the delay-constrained least-cost problem. Finally, we propose an approximate latency-aware micro-service mashup approach to solve the problem. Experimental results show that the proposed approach achieves a substantial reduction in network resource consumption while still satisfying the latency constraint.
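
The abstract does not give the paper's notation, but a minimal sketch of what such an integer nonlinear formulation can look like, under assumed symbols (S: the micro-service tasks of the application, N: candidate edge instances, c_{s,n}: network resource cost of serving task s with candidate n, d(x): end-to-end latency of the composed mashup, D: latency bound), could read:

\begin{aligned}
\min_{x}\quad & \sum_{s \in S}\sum_{n \in N} c_{s,n}\, x_{s,n} && \text{(total network resource consumption)}\\
\text{s.t.}\quad & d(x) \le D && \text{(end-to-end latency bound)}\\
& \sum_{n \in N} x_{s,n} = 1 \quad \forall s \in S && \text{(one candidate selected per micro-service)}\\
& x_{s,n} \in \{0,1\} \quad \forall s \in S,\ n \in N && \text{(integer selection variables)}
\end{aligned}

In a formulation of this shape, the nonlinearity typically enters through the latency term d(x), which accumulates along the chain of selected micro-services; coupling an additive delay bound with a cost minimization mirrors the structure of the delay-constrained least-cost problem the abstract uses for the NP-hardness reduction.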
Pages: 15411–15425 (14 pages)