Skyline-Enhanced Deep Reinforcement Learning Approach for Energy-Efficient and QoS-Guaranteed Multi-Cloud Service Composition

Cited by: 6
Authors
Ma, Wenhao [1 ]
Xu, Hongzhen [1 ,2 ,3 ]
Affiliations
[1] East China Univ Technol, Sch Informat Engn, Nanchang 330013, Peoples R China
[2] East China Univ Technol, Sch Software, Nanchang 330013, Peoples R China
[3] Jiangxi Key Lab Cybersecur Intelligent Percept, Nanchang 330013, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Iss. 11
Keywords
cloud service composition; multi-cloud; deep reinforcement learning; skyline; energy consumption; QoS aware; NEURAL-NETWORKS; ALGORITHM;
DOI
10.3390/app13116826
Chinese Library Classification
O6 [Chemistry];
Discipline Classification Code
0703 ;
Abstract
Cloud computing has grown rapidly in recent years and has become a critical computing paradigm. Composing multiple cloud services to satisfy complex user requirements is a research hotspot in cloud computing. Service composition in multi-cloud environments incurs high energy consumption, which underscores the importance of energy awareness in cross-cloud service composition. Nonetheless, prior research has mainly focused on finding a service composition that maximizes quality of service (QoS) while overlooking the energy consumed during service invocation. Additionally, the dynamic nature of multi-cloud environments challenges the adaptability and scalability of cloud service composition methods. We therefore propose the skyline-enhanced deep reinforcement learning approach (SkyDRL) to address these challenges. Our approach defines an energy consumption model for cloud service composition in multi-cloud environments and leverages the branch-and-bound skyline algorithm to reduce the search space and training time. We further enhance the basic deep Q-network (DQN) algorithm by incorporating double DQN to address the overestimation problem, and by adding a Dueling Network and Prioritized Experience Replay to speed up training and improve stability. We evaluate the proposed method in comparative experiments against existing methods. The results demonstrate that our approach effectively reduces the energy consumption of cloud service composition while maintaining good adaptability and scalability, outperforming existing approaches with energy savings ranging from 8% to 35%.
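The two enhancements named in the abstract can be illustrated with a minimal sketch. The skyline step keeps only Pareto-non-dominated candidate services (a plain filter here; the paper uses a branch-and-bound variant for efficiency), and the double-DQN step decouples action selection (online network) from action evaluation (target network) to curb the overestimation of vanilla DQN max-targets. The QoS attribute tuple and the "higher is better" sign convention are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def dominates(a, b):
    """a dominates b if a is at least as good in every attribute and
    strictly better in at least one (all attributes maximized here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(candidates):
    """Keep only non-dominated candidates (the skyline set); a
    branch-and-bound variant computes the same set with less work."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))        # selection by online net
    return reward + gamma * float(q_target_next[a_star])  # evaluation by target net

# Illustrative QoS vectors: (availability, throughput, -response_time, -energy),
# negated so that every attribute is maximized.
services = [(0.99, 120, -30, -5), (0.95, 100, -40, -9), (0.99, 150, -25, -4)]
print(skyline(services))  # → [(0.99, 150, -25, -4)]: the only non-dominated service
print(double_dqn_target(1.0, np.array([1.0, 2.0]), np.array([0.5, 0.3]), gamma=0.9))
```

Pruning dominated services before training shrinks the agent's action space, which is what reduces the reported search space and training time.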
Pages: 28