A Deep Recurrent-Reinforcement Learning Method for Intelligent AutoScaling of Serverless Functions

Cited by: 4
Authors
Agarwal, Siddharth [1 ]
Rodriguez, Maria A. [1 ]
Buyya, Rajkumar [1 ]
Affiliations
[1] Univ Melbourne, Sch Comp & Informat Syst, Cloud Comp & Distributed Syst CLOUDS Lab, Melbourne, Vic 3010, Australia
Keywords
Serverless computing; function-as-a-service; AutoScaling; reinforcement learning; constraint-awareness;
DOI
10.1109/TSC.2024.3387661
CLC number
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
Function-as-a-Service (FaaS) introduces a lightweight, function-based cloud execution model that is relevant to a range of applications such as IoT-edge data processing and anomaly detection. While cloud service providers (CSPs) offer near-infinite function elasticity, these applications often experience fluctuating workloads and stricter performance constraints. A typical CSP strategy, known as autoscaling, is to empirically determine and adjust the desired number of function instances or resources based on monitored thresholds such as CPU or memory utilisation to cope with demand and performance. However, threshold configuration requires expert knowledge, historical data or a complete view of the environment, making autoscaling a performance bottleneck that lacks an adaptable solution. Reinforcement learning (RL) algorithms have proven beneficial in analysing complex cloud environments and yield adaptable policies that maximise the expected objectives. Most realistic cloud environments involve operational interference and have limited visibility, making them partially observable. A general solution to tackle observability in highly dynamic settings is to integrate recurrent units with model-free RL algorithms and model the decision process as a Partially Observable Markov Decision Process (POMDP). Therefore, in this article, we investigate model-free recurrent RL agents for function autoscaling and compare them against the model-free Proximal Policy Optimisation (PPO) algorithm. We explore the integration of a Long Short-Term Memory (LSTM) network with the state-of-the-art PPO algorithm and find that, under our experimental and evaluation settings, recurrent policies were able to capture the environment parameters and show promising results for function autoscaling. We further compare a PPO-based autoscaling agent with commercially used threshold-based function autoscaling and posit that an LSTM-based autoscaling agent improves throughput by 18%, improves function execution by 13%, and accounts for 8.4% more function instances.
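The abstract describes pairing an LSTM with a PPO-style policy so the autoscaling agent can act under partial observability. The snippet below is a minimal, illustrative sketch of such a recurrent actor-critic network in PyTorch; it is not the authors' implementation, and the dimensions, the monitored-metric inputs, and the three-action space (scale in / hold / scale out) are assumptions made for the example.

# Minimal sketch (assumed, not the paper's code): an LSTM-backed actor-critic
# of the kind a recurrent PPO agent would use for function autoscaling.
import torch
import torch.nn as nn

class RecurrentActorCritic(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.policy_head = nn.Linear(hidden, n_actions)  # scaling-action logits
        self.value_head = nn.Linear(hidden, 1)           # state-value estimate

    def forward(self, obs_seq, hidden_state=None):
        # obs_seq: (batch, time, obs_dim) sequence of monitored metrics,
        # e.g. request rate, CPU utilisation, current instance count (assumed).
        x = torch.relu(self.encoder(obs_seq))
        x, hidden_state = self.lstm(x, hidden_state)
        logits = self.policy_head(x)            # per-step action logits
        value = self.value_head(x).squeeze(-1)  # per-step value estimates
        return logits, value, hidden_state

if __name__ == "__main__":
    # Hypothetical rollout: 5 metrics over 8 time steps, 3 actions
    # (scale in, hold, scale out).
    net = RecurrentActorCritic(obs_dim=5, n_actions=3)
    obs = torch.randn(1, 8, 5)
    logits, value, h = net(obs)
    action = torch.distributions.Categorical(logits=logits[:, -1]).sample()
    print("chosen scaling action:", action.item())

In a PPO training loop the LSTM hidden state would be carried across environment steps, which is what lets the policy summarise unobserved aspects of the POMDP from the history of metrics rather than from the current observation alone.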
Pages: 1899-1910
Page count: 12