Skyline-Enhanced Deep Reinforcement Learning Approach for Energy-Efficient and QoS-Guaranteed Multi-Cloud Service Composition

Cited by: 6
Authors
Ma, Wenhao [1 ]
Xu, Hongzhen [1 ,2 ,3 ]
Affiliations
[1] East China Univ Technol, Sch Informat Engn, Nanchang 330013, Peoples R China
[2] East China Univ Technol, Sch Software, Nanchang 330013, Peoples R China
[3] Jiangxi Key Lab Cybersecur Intelligent Percept, Nanchang 330013, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 11
Keywords
cloud service composition; multi-cloud; deep reinforcement learning; skyline; energy consumption; QoS aware; NEURAL-NETWORKS; ALGORITHM;
DOI
10.3390/app13116826
CLC Classification
O6 [Chemistry];
Subject Classification
0703;
Abstract
Cloud computing has experienced rapid growth in recent years and has become a critical computing paradigm. Combining multiple cloud services to satisfy complex user requirements has become a research hotspot in cloud computing. Service composition in multi-cloud environments incurs high energy consumption, making energy an important concern in cross-cloud service composition. However, prior research has mainly focused on finding a service composition that maximizes quality of service (QoS) while overlooking the energy consumed during service invocation. Additionally, the dynamic nature of multi-cloud environments challenges the adaptability and scalability of cloud service composition methods. We therefore propose the skyline-enhanced deep reinforcement learning approach (SkyDRL) to address these challenges. Our approach defines an energy consumption model for cloud service composition in multi-cloud environments and leverages the branch-and-bound skyline algorithm to reduce the search space and training time. We also enhance the basic deep Q-network (DQN) algorithm by incorporating double DQN to address the overestimation problem, and by adding a dueling network and prioritized experience replay to speed up training and improve stability. We evaluate the proposed method in comparative experiments against existing methods. The results demonstrate that our approach effectively reduces energy consumption in cloud service composition while maintaining good adaptability and scalability, achieving energy savings of 8% to 35% over existing approaches.
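The skyline step named in the abstract prunes candidate services that are dominated in every quality dimension before reinforcement-learning training begins. As a minimal sketch (not the paper's implementation), assuming each candidate service is represented by a tuple of attributes where lower is better in every dimension (e.g. latency, energy consumption), a naive skyline filter looks like:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every dimension
    and strictly better in at least one (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(candidates):
    """Keep only the non-dominated candidates (the skyline set)."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical (latency, energy) vectors for four candidate services.
services = [
    (0.2, 5.0),
    (0.3, 4.0),
    (0.4, 6.0),  # dominated by (0.3, 4.0)
    (0.2, 7.0),  # dominated by (0.2, 5.0)
]
print(skyline(services))  # → [(0.2, 5.0), (0.3, 4.0)]
```

The branch-and-bound skyline algorithm referenced in the abstract reaches the same skyline set while pruning whole groups of candidates at once rather than performing this quadratic all-pairs comparison; the dominance criterion itself is unchanged.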
Pages: 28
Related Papers
38 records total
  • [21] Distributed Energy-Efficient Multi-UAV Navigation for Long-Term Communication Coverage by Deep Reinforcement Learning
    Liu, Chi Harold
    Ma, Xiaoxin
    Gao, Xudong
    Tang, Jian
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2020, 19 (06) : 1274 - 1285
  • [22] Curiosity-Driven Energy-Efficient Worker Scheduling in Vehicular Crowdsourcing: A Deep Reinforcement Learning Approach
    Liu, Chi Harold
    Zhao, Yinuo
    Dai, Zipeng
    Yuan, Ye
    Wang, Guoren
    Wu, Dapeng
    Leung, Kin K.
    2020 IEEE 36TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2020), 2020, : 25 - 36
  • [23] Energy-Efficient Virtual Network Embedding: A Deep Reinforcement Learning Approach Based on Graph Convolutional Networks
    Zhang, Peiying
    Wang, Enqi
    Luo, Zhihu
    Bi, Yanxian
    Liu, Kai
    Wang, Jian
    ELECTRONICS, 2024, 13 (10)
  • [24] Joint power allocation and MCS selection for energy-efficient link adaptation: A deep reinforcement learning approach
    Parsa, Ali
    Moghim, Neda
    Salavati, Pouyan
    COMPUTER NETWORKS, 2022, 218
  • [25] Cloud-SEnergy: A bin-packing based multi-cloud service broker for energy efficient composition and execution of data-intensive applications
    Baker, Thar
    Aldawsari, Bandar
    Asim, Muhammad
    Tawfik, Hissam
    Maamar, Zakaria
    Buyya, Rajkumar
    SUSTAINABLE COMPUTING-INFORMATICS & SYSTEMS, 2018, 19 : 242 - 252
  • [26] Energy-efficient collaborative task offloading in multi-access edge computing based on deep reinforcement learning
    Wang, Shudong
    Zhao, Shengzhe
    Gui, Haiyuan
    He, Xiao
    Lu, Zhi
    Chen, Baoyun
    Fan, Zixuan
    Pang, Shanchen
    AD HOC NETWORKS, 2025, 169
  • [27] A deep multi-agent reinforcement learning approach for the micro-service migration problem with affinity in the cloud
    Ma, Ning
    Tang, Angjun
    Xiong, Zifeng
    Jiang, Fuxin
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 273
  • [28] Energy-efficient UAV-enabled computation offloading for industrial internet of things: a deep reinforcement learning approach
    Shi, Shuo
    Wang, Meng
    Gu, Shushi
    Zheng, Zhong
    WIRELESS NETWORKS, 2024, 30 (05) : 3921 - 3934
  • [29] Energy-efficient multi-pass cutting parameters optimisation for aviation parts in flank milling with deep reinforcement learning
    Lu, Fengyi
    Zhou, Guanghui
    Zhang, Chao
    Liu, Yang
    Chang, Fengtian
    Xiao, Zhongdong
    ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2023, 81
  • [30] 3D Autonomous Navigation of UAVs: An Energy-Efficient and Collision-Free Deep Reinforcement Learning Approach
    Wang, Yubin
    Biswas, Karnika
    Zhang, Liwen
    Ghazzai, Hakim
    Massoud, Yehia
    2022 IEEE ASIA PACIFIC CONFERENCE ON CIRCUITS AND SYSTEMS, APCCAS, 2022, : 404 - 408