Skyline-Enhanced Deep Reinforcement Learning Approach for Energy-Efficient and QoS-Guaranteed Multi-Cloud Service Composition

Cited by: 6
Authors
Ma, Wenhao [1 ]
Xu, Hongzhen [1 ,2 ,3 ]
Affiliations
[1] East China Univ Technol, Sch Informat Engn, Nanchang 330013, Peoples R China
[2] East China Univ Technol, Sch Software, Nanchang 330013, Peoples R China
[3] Jiangxi Key Lab Cybersecur Intelligent Percept, Nanchang 330013, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 11
Keywords
cloud service composition; multi-cloud; deep reinforcement learning; skyline; energy consumption; QoS aware; NEURAL-NETWORKS; ALGORITHM;
DOI
10.3390/app13116826
Chinese Library Classification
O6 [Chemistry];
Subject classification code
0703;
Abstract
Cloud computing has grown rapidly in recent years and has become a critical computing paradigm. Composing multiple cloud services to satisfy complex user requirements has become a research hotspot in cloud computing. However, service composition in multi-cloud environments incurs high energy consumption, yet prior research has mainly focused on finding a composition that maximizes quality of service (QoS) while overlooking the energy consumed during service invocation. Additionally, the dynamic nature of multi-cloud environments challenges the adaptability and scalability of cloud service composition methods. To address these challenges, we propose the skyline-enhanced deep reinforcement learning approach (SkyDRL). Our approach defines an energy consumption model for cloud service composition in multi-cloud environments and leverages a branch-and-bound skyline algorithm to reduce the search space and training time. We further enhance the basic deep Q-network (DQN) algorithm by incorporating double DQN to mitigate Q-value overestimation, along with a dueling network architecture and prioritized experience replay to speed up training and improve stability. We evaluate the proposed method through comparative experiments against existing methods. The results demonstrate that our approach effectively reduces energy consumption in cloud service composition while maintaining good adaptability and scalability, achieving energy savings of 8% to 35% over existing approaches.
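The skyline pruning described in the abstract rests on Pareto dominance: a candidate service can be discarded when another service is at least as good in every attribute and strictly better in at least one. The following minimal sketch illustrates that filtering step only (it is not the authors' branch-and-bound implementation; the `dominates`/`skyline` names and the cost-type attributes, where lower is better, are illustrative assumptions):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every attribute and strictly
    better in at least one (cost-type attributes: lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    """Keep only non-dominated (skyline) services; dominated candidates
    can be pruned from the composition search space before RL training."""
    return [s for s in services
            if not any(dominates(t, s) for t in services if t is not s)]

# Candidate services as (energy, latency) tuples
candidates = [(3, 5), (4, 4), (5, 6), (2, 7)]
print(skyline(candidates))  # (5, 6) is dominated by both (3, 5) and (4, 4)
```

Pruning dominated services shrinks the action space the DQN agent must explore, which is why the paper reports shorter training times.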
Pages: 28
Related Papers
46 records in total
[21]   PPDRL: A Pretraining-and-Policy-Based Deep Reinforcement Learning Approach for QoS-Aware Service Composition [J].
Yi, Kan ;
Yang, Jin ;
Wang, Shuangling ;
Zhang, Zhengtong ;
Ren, Xiao .
SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
[22]   Enhanced time-aware QoS prediction in multi-cloud: a hybrid k-medoids and lazy learning approach (QoPC) [J].
Keshavarzi, Amin ;
Haghighat, Abolfazl Toroghi ;
Bohlouli, Mahdi .
COMPUTING, 2020, 102 :923-949
[23]   Distributed Energy-Efficient Multi-UAV Navigation for Long-Term Communication Coverage by Deep Reinforcement Learning [J].
Liu, Chi Harold ;
Ma, Xiaoxin ;
Gao, Xudong ;
Tang, Jian .
IEEE TRANSACTIONS ON MOBILE COMPUTING, 2020, 19 (06) :1274-1285
[24]   Energy-efficient strategy generation for smart jammer in non-zero-sum games: A deep reinforcement learning approach [J].
Peng, Xiang ;
Xu, Hua ;
Qi, Zisen ;
Wang, Dan ;
Zhang, Yue ;
Pang, Yiqiong .
COMPUTER NETWORKS, 2025, 270
[25]   Enhancing Ice-Snow Consumer Electronics With AIoT: A Deep Reinforcement Learning Approach for Robust and Energy-Efficient Performance [J].
Tian, Yunping ;
Manimurugan, S. ;
Narmatha, C. .
IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2025, 71 (01) :1909-1917
[26]   Energy-Efficient Resource Management for Multi-UAV NOMA Networks Based on Deep Reinforcement Learning [J].
Lin, Xiangda ;
Yang, Helin ;
Lin, Kailong ;
Xiao, Liang ;
Shi, Zhaoyuan ;
Yang, Wanting ;
Xiong, Zehui .
2024 IEEE 99TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2024-SPRING, 2024
[27]   Curiosity-Driven Energy-Efficient Worker Scheduling in Vehicular Crowdsourcing: A Deep Reinforcement Learning Approach [J].
Liu, Chi Harold ;
Zhao, Yinuo ;
Dai, Zipeng ;
Yuan, Ye ;
Wang, Guoren ;
Wu, Dapeng ;
Leung, Kin K. .
2020 IEEE 36TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2020), 2020, :25-36
[28]   Joint power allocation and MCS selection for energy-efficient link adaptation: A deep reinforcement learning approach [J].
Parsa, Ali ;
Moghim, Neda ;
Salavati, Pouyan .
COMPUTER NETWORKS, 2022, 218
[29]   Energy-Efficient Virtual Network Embedding: A Deep Reinforcement Learning Approach Based on Graph Convolutional Networks [J].
Zhang, Peiying ;
Wang, Enqi ;
Luo, Zhihu ;
Bi, Yanxian ;
Liu, Kai ;
Wang, Jian .
ELECTRONICS, 2024, 13 (10)
[30]   Cloud-SEnergy: A bin-packing based multi-cloud service broker for energy efficient composition and execution of data-intensive applications [J].
Baker, Thar ;
Aldawsari, Bandar ;
Asim, Muhammad ;
Tawfik, Hissam ;
Maamar, Zakaria ;
Buyya, Rajkumar .
SUSTAINABLE COMPUTING-INFORMATICS & SYSTEMS, 2018, 19 :242-252