Latency-Aware Kubernetes Scheduling for Microservices Orchestration at the Edge

Cited: 6
Authors
Centofanti, C. [1 ]
Tiberti, W. [1 ]
Marotta, A. [1 ]
Graziosi, F. [1 ]
Cassioli, D. [1 ]
Affiliations
[1] Univ Aquila, Dept Informat Engn Comp Sci & Math, I-67100 L'Aquila, Italy
Source
2023 IEEE 9TH INTERNATIONAL CONFERENCE ON NETWORK SOFTWARIZATION, NETSOFT | 2023
Funding
EU Horizon 2020
Keywords
Edge; Orchestration; Kubernetes; Latency; Service Placement;
DOI
10.1109/NetSoft57336.2023.10175431
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Network and computing infrastructures are nowadays challenged to meet the increasingly stringent requirements of novel applications. One of the most critical aspects is optimizing the latency perceived by the end-user accessing the services. New network architectures offer a natural framework for the efficient orchestration of microservices. However, how to incorporate accurate latency metrics into orchestration decisions is still an open challenge. In this work we propose a novel architectural approach to performing scheduling operations in a Kubernetes environment. Existing approaches collect network metrics, e.g., the latency between nodes in the cluster, via purposely-built external measurement services deployed in the cluster. Compared to these approaches, the proposed one: (i) collects performance metrics at the application layer instead of the network layer; (ii) relies on latency measurements performed inside the service of interest instead of on external measurement services; (iii) takes scheduling decisions based on the latency effectively perceived by the end-user instead of the latency between cluster nodes. We show the effectiveness of our approach by adopting an iterative discovery strategy that dynamically determines which node operates with the lowest latency for Kubernetes pod placement.
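The iterative discovery strategy described in the abstract can be sketched as a simple placement loop: each candidate node is repeatedly probed for application-layer latency, and the pod is placed on the node with the lowest observed value. The following is a minimal illustrative Python sketch, not the authors' implementation; the node names, the fixed latency values, and the `measure_app_latency` probe are all hypothetical stand-ins for the in-service measurements the paper describes.

```python
def measure_app_latency(node: str) -> float:
    """Stand-in for an application-layer latency probe reported from
    inside the service itself. Here we return fixed values (in ms)
    per hypothetical node purely for illustration."""
    latencies = {"edge-1": 12.0, "edge-2": 4.5, "cloud-1": 38.0}
    return latencies[node]

def iterative_discovery(nodes: list[str], rounds: int = 3) -> tuple[str, float]:
    """Probe each candidate node over several rounds and keep the one
    with the lowest observed end-user latency (an illustrative sketch
    of a latency-driven placement decision)."""
    best_node, best_latency = None, float("inf")
    for _ in range(rounds):
        for node in nodes:
            latency = measure_app_latency(node)
            if latency < best_latency:
                best_node, best_latency = node, latency
    return best_node, best_latency

# Pick the placement target among three hypothetical cluster nodes.
node, lat = iterative_discovery(["edge-1", "edge-2", "cloud-1"])
```

In a real deployment the decision would be fed to the Kubernetes control plane, e.g. via a custom scheduler or a scheduler extender, rather than computed in-process as above.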
Pages: 426-431
Number of pages: 6
Related Papers
50 records total
  • [1] Latency-Aware Industrial Fog Application Orchestration with Kubernetes
    Eidenbenz, Raphael
    Pignolet, Yvonne-Anne
    Ryser, Alain
    2020 FIFTH INTERNATIONAL CONFERENCE ON FOG AND MOBILE EDGE COMPUTING (FMEC), 2020, : 164 - 171
  • [2] JANUS: Latency-Aware Traffic Scheduling for IoT Data Streaming in Edge Environments
    Wen, Zhenyu
    Yang, Renyu
    Qian, Bin
    Xuan, Yubo
    Lu, Lingling
    Wang, Zheng
    Peng, Hao
    Xu, Jie
    Zomaya, Albert Y.
    Ranjan, Rajiv
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2023, 16 (06) : 4302 - 4316
  • [3] Latency-Aware Offloading for Mobile Edge Computing Networks
    Feng, Wei
    Liu, Hao
    Yao, Yingbiao
    Cao, Diqiu
    Zhao, Mingxiong
    IEEE COMMUNICATIONS LETTERS, 2021, 25 (08) : 2673 - 2677
  • [4] Latency-Aware and Proactive Service Placement for Edge Computing
    Sfaxi, Henda
    Lahyani, Imene
    Yangui, Sami
    Torjmen, Mouna
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (04): 4243 - 4254
  • [5] Latency-Aware Adaptive Video Summarization for Mobile Edge Clouds
    Wang, Ying
    Dong, Yifan
    Guo, Songtao
    Yang, Yuanyuan
    Liao, Xiaofeng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (05) : 1193 - 1207
  • [6] Online Reconfiguration of Latency-Aware IoT Services in Edge Networks
    Li, Xiaocui
    Zhou, Zhangbing
    Zhu, Chunsheng
    Shu, Lei
    Zhou, Jiehan
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (18) : 17035 - 17046
  • [7] Latency-Aware Task Assignment and Scheduling in Collaborative Cloud Robotic Systems
    Li, Shenghui
    Zheng, Zhiheng
    Chen, Wuhui
    Zheng, Zibin
    Wang, Junbo
    PROCEEDINGS 2018 IEEE 11TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING (CLOUD), 2018, : 65 - 72
  • [8] LMM: latency-aware micro-service mashup in mobile edge computing environment
    Zhou, Ao
    Wang, Shangguang
    Wan, Shaohua
    Qi, Lianyong
    NEURAL COMPUTING & APPLICATIONS, 2020, 32 (19) : 15411 - 15425
  • [9] Towards an internet-scale overlay network for latency-aware decentralized workflows at the edge
    Kathiravelu, Pradeeban
    Zaiman, Zachary
    Gichoya, Judy
    Veiga, Luis
    Banerjee, Imon
    COMPUTER NETWORKS, 2022, 203