BATCH: Machine Learning Inference Serving on Serverless Platforms with Adaptive Batching

Cited by: 99
Authors
Ali, Ahsan [1 ]
Pinciroli, Riccardo [2 ]
Yan, Feng [1 ]
Smirni, Evgenia [2 ]
Affiliations
[1] Univ Nevada, Reno, NV 89557 USA
[2] William & Mary, Williamsburg, VA USA
Source
PROCEEDINGS OF SC20: THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS (SC20) | 2020
Funding
U.S. National Science Foundation;
Keywords
Machine-learning-as-a-service (MLaaS); Inference; Serving; Batching; Cloud; Serverless; Service Level Objective (SLO); Cost-effective; Optimization; Modeling; Prediction;
DOI
10.1109/SC41405.2020.00073
CLC number
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Serverless computing is a new pay-per-use cloud service paradigm that automates resource scaling for stateless functions and can potentially facilitate bursty machine learning serving. Batching is critical for the latency performance and cost-effectiveness of machine learning inference, but unfortunately it is not supported by existing serverless platforms due to their stateless design. Our experiments show that without batching, machine learning serving cannot reap the benefits of serverless computing. In this paper, we present BATCH, a framework for supporting efficient machine learning serving on serverless platforms. BATCH uses an optimizer to provide inference tail latency guarantees and cost optimization and to enable adaptive batching support. We prototype BATCH atop AWS Lambda and popular machine learning inference systems. The evaluation verifies the accuracy of the analytic optimizer and demonstrates performance and cost advantages over the state-of-the-art method MArk and the state-of-the-practice tool SageMaker.
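For illustration only, the following is a minimal Python sketch of the buffering idea behind adaptive batching as described in the abstract: requests are accumulated and dispatched to the model once either a maximum batch size or a timeout is reached. The requests queue, run_inference stub, and the max_batch_size/timeout_s parameters are hypothetical placeholders; this is not the BATCH prototype, its AWS Lambda integration, or its SLO-aware optimizer, which would itself choose these parameters.

import time
from queue import Queue, Empty

def run_inference(batch):
    # Placeholder for the actual model call (e.g., a request to a serving
    # framework); hypothetical, stands in for the real inference backend.
    pass

def batching_loop(requests: Queue, max_batch_size: int, timeout_s: float):
    while True:
        # Block until the first request of the next batch arrives,
        # then start the batching timeout.
        batch = [requests.get()]
        deadline = time.monotonic() + timeout_s
        # Keep adding requests until the batch is full or the timeout expires.
        while len(batch) < max_batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(requests.get(timeout=remaining))
            except Empty:
                break
        # Dispatch the whole batch to the model in a single call.
        run_inference(batch)

In an actual deployment, the batch size and timeout would be tuned jointly against the latency SLO and cost, which is the role of BATCH's analytic optimizer.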
Pages: 15