Resource Management in Multi-Cloud Scenarios via Reinforcement Learning

Cited: 0
Authors
Pietrabissa, Antonio [1 ]
Battilotti, Stefano [1 ]
Facchinei, Francisco [1 ]
Giuseppi, Alessandro [1 ]
Oddi, Guido [1 ]
Panfili, Martina [1 ]
Suraci, Vincenzo [1 ]
Affiliations
[1] Univ Roma La Sapienza, Dept Comp Control & Management Engn Antonio Ruberti, Rome, Italy
Source
2015 34TH CHINESE CONTROL CONFERENCE (CCC), 2015
Keywords
Cloud networks; Resource Management; Reinforcement Learning; Markov Decision Process;
DOI
Not available
Chinese Library Classification
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
The concept of Virtualization of Network Resources, such as cloud storage and computing power, has become crucial to any business that needs dynamic IT resources. By virtualization, we refer to the migration of tasks usually performed by hardware infrastructures to virtual IT resources. This approach allows resources to be rapidly deployed, scaled, and dynamically reassigned. In the last few years, the demand for cloud resources has grown dramatically, and a new figure has come to play a key role: the Cloud Management Broker (CMB). The CMB's purpose is to manage cloud resources so as to meet the users' requirements and, at the same time, to optimize their usage. This paper proposes two multi-cloud resource allocation algorithms that manage resource requests with the aim of maximizing the CMB revenue over time. The algorithms, based on Reinforcement Learning techniques, are evaluated and compared by numerical simulations.
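The abstract frames broker-side allocation as a Markov Decision Process solved with Reinforcement Learning. As a purely illustrative sketch, and not the paper's two algorithms, the toy below applies tabular Q-learning to a hypothetical accept/reject broker: the state is the number of resource units in use, accepting a request earns revenue and occupies a unit, and occupied units are released stochastically. The capacity, revenue, release probability, and all hyperparameters are made-up assumptions.

```python
import random

# Toy broker MDP (illustrative assumptions, not the paper's model).
CAPACITY = 5          # resource units the broker can lease out
ACCEPT, REJECT = 0, 1
REVENUE = 1.0         # revenue earned per accepted request
RELEASE_PROB = 0.3    # chance each occupied unit is released per step

def step(state, action, rng):
    """One MDP transition: state = units currently in use."""
    reward = 0.0
    if action == ACCEPT and state < CAPACITY:
        reward = REVENUE
        state += 1
    # each previously allocated unit is released independently
    state -= sum(rng.random() < RELEASE_PROB for _ in range(state))
    return state, reward

def q_learning(episodes=2000, horizon=50, alpha=0.1, gamma=0.95,
               eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(CAPACITY + 1)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.randrange(2)  # explore
            else:                     # exploit (ties go to ACCEPT)
                a = ACCEPT if Q[s][ACCEPT] >= Q[s][REJECT] else REJECT
            s2, r = step(s, a, rng)
            # standard Q-learning update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# Greedy policy learned for each non-full state; with revenue and no
# cost in this toy, accepting should dominate well below capacity.
policy = [ACCEPT if Q[s][ACCEPT] >= Q[s][REJECT] else REJECT
          for s in range(CAPACITY)]
```

A realistic broker model would add per-provider costs and rejection penalties to the reward, which is where the revenue-maximization trade-off the abstract describes actually arises.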
Pages: 9084-9089 (6 pages)
Related Papers
50 records
  • [41] Joint Device Participation, Dataset Management, and Resource Allocation in Wireless Federated Learning via Deep Reinforcement Learning
    Chen, Jinlian
    Zhang, Jun
    Zhao, Nan
    Pei, Yiyang
    Liang, Ying-Chang
    Niyato, Dusit
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (03) : 4505 - 4510
  • [42] Resource Scheduling for Offline Cloud Computing Using Deep Reinforcement Learning
    El-Boghdadi, Hatem M.
    Ramadan, Rabie A.
    INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2019, 19 (04): 54 - 60
  • [43] Reinforcement Learning to Improve Resource Scheduling and Load Balancing in Cloud Computing
    Kaveri, P. R.
    Lahande, P.
    SN COMPUTER SCIENCE, 4 (2)
  • [44] DERP: A Deep Reinforcement Learning Cloud System for Elastic Resource Provisioning
    Bitsakos, Constantinos
    Konstantinou, Ioannis
    Koziris, Nectarios
    2018 16TH IEEE INTERNATIONAL CONFERENCE ON CLOUD COMPUTING TECHNOLOGY AND SCIENCE (CLOUDCOM 2018), 2018, : 21 - 29
  • [45] Reinforcement Learning Approach for Optimizing Cloud Resource Utilization With Load Balancing
    Lahande, Prathamesh Vijay
    Kaveri, Parag Ravikant
    Saini, Jatinderkumar R.
    Kotecha, Ketan
    Alfarhood, Sultan
    IEEE ACCESS, 2023, 11 : 127567 - 127577
  • [46] A Reinforcement Learning-Based Resource Allocation Scheme for Cloud Robotics
    Liu, Hang
    Liu, Shiwen
    Zheng, Kan
    IEEE ACCESS, 2018, 6 : 17215 - 17222
  • [47] Efficient Adaptive Resource Provisioning for Cloud Applications using Reinforcement Learning
    John, Indu
    Bhatnagar, Shalabh
    Sreekantan, Aiswarya
    2019 IEEE 4TH INTERNATIONAL WORKSHOPS ON FOUNDATIONS AND APPLICATIONS OF SELF* SYSTEMS (FAS*W 2019), 2019, : 271 - 272
  • [48] A multi-agent deep reinforcement learning approach for optimal resource management in serverless computing
    Singh, Ashutosh Kumar
    Kumar, Satender
    Jain, Sarika
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2025, 28 (02)
  • [49] Resource Allocation of Multi-user Workloads in Cloud and Edge Data-Centers Using Reinforcement Learning
    Jimenez, Julian
    Soto, Paola
    De Vleeschauwer, Danny
    Chang, Chia-Yu
    De Bock, Yorick
    Latre, Steven
    Camelo, Miguel
    2023 19TH INTERNATIONAL CONFERENCE ON NETWORK AND SERVICE MANAGEMENT, CNSM, 2023
  • [50] Multi-Agent Transfer Reinforcement Learning for Resource Management in Underwater Acoustic Communication Networks
    Wang, Hui
    Wu, Hongrun
    Chen, Yingpin
    Ma, Biyang
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11 (02): : 2012 - 2023