Skyline-Enhanced Deep Reinforcement Learning Approach for Energy-Efficient and QoS-Guaranteed Multi-Cloud Service Composition

Cited: 6
Authors
Ma, Wenhao [1 ]
Xu, Hongzhen [1 ,2 ,3 ]
Affiliations
[1] East China Univ Technol, Sch Informat Engn, Nanchang 330013, Peoples R China
[2] East China Univ Technol, Sch Software, Nanchang 330013, Peoples R China
[3] Jiangxi Key Lab Cybersecur Intelligent Percept, Nanchang 330013, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 11
Keywords
cloud service composition; multi-cloud; deep reinforcement learning; skyline; energy consumption; QoS aware; NEURAL-NETWORKS; ALGORITHM;
DOI
10.3390/app13116826
Chinese Library Classification (CLC)
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Cloud computing has grown rapidly in recent years and has become a critical computing paradigm. Composing multiple cloud services to satisfy complex user requirements has become a research hotspot in cloud computing. Service composition in multi-cloud environments is characterized by high energy consumption, which makes energy consumption an important concern in cross-cloud service composition. Nonetheless, prior research has mainly focused on finding a service composition that maximizes quality of service (QoS) while overlooking the energy consumed during service invocation. Additionally, the dynamic nature of multi-cloud environments challenges the adaptability and scalability of cloud service composition methods. We therefore propose the skyline-enhanced deep reinforcement learning approach (SkyDRL) to address these challenges. Our approach defines an energy consumption model for cloud service composition in multi-cloud environments and leverages a branch-and-bound skyline algorithm to reduce the search space and training time. We further enhance the basic deep Q-network (DQN) algorithm by incorporating double DQN to address the overestimation problem, along with a dueling network architecture and prioritized experience replay to speed up training and improve stability. We evaluate the proposed method in comparative experiments against existing methods. The results demonstrate that our approach effectively reduces energy consumption in cloud service composition while maintaining good adaptability and scalability, outperforming existing approaches with energy savings ranging from 8% to 35%.
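The skyline pruning mentioned in the abstract rests on Pareto dominance: a candidate service survives only if no other service is at least as good in every QoS dimension and strictly better in at least one. The sketch below is a minimal, illustrative skyline filter under an assumed lower-is-better convention for all criteria (e.g. energy, latency, cost); the paper's branch-and-bound variant adds tree-based pruning on top of this basic filter, which is not reproduced here.

```python
def dominates(a, b):
    """True if service `a` dominates `b`: a is no worse than b in every
    criterion (lower is better) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    """Keep only non-dominated services; these form the skyline set
    that the composition search needs to consider."""
    return [p for p in services if not any(dominates(q, p) for q in services if q is not p)]

# Example: (energy, latency) tuples; (1, 1) dominates every other candidate.
candidates = [(1, 2), (2, 1), (3, 3), (1, 1)]
print(skyline(candidates))
```

Filtering candidate services per task this way shrinks the action space the DQN agent must explore, which is the source of the reduced training time the abstract reports.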
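The double DQN enhancement the abstract describes decouples action selection (done by the online network) from action evaluation (done by the target network) when forming the bootstrap target, which mitigates the overestimation bias of vanilla DQN. A minimal NumPy sketch of the target computation, with assumed array shapes and a hypothetical function name (the paper's full agent also adds a dueling architecture and prioritized replay, omitted here):

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, gamma=0.99, dones=None):
    """Double-DQN bootstrap targets.

    rewards:       (batch,) immediate rewards
    next_q_online: (batch, n_actions) Q-values of s' from the online network
    next_q_target: (batch, n_actions) Q-values of s' from the target network
    dones:         (batch,) bool, True where the episode terminated
    """
    if dones is None:
        dones = np.zeros(len(rewards), dtype=bool)
    # Online network selects the greedy action ...
    best_actions = np.argmax(next_q_online, axis=1)
    # ... target network evaluates it (this split reduces overestimation).
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]
    return rewards + gamma * evaluated * (~dones)
```

In a full training loop these targets replace `r + gamma * max_a Q_target(s', a)` in the temporal-difference loss; everything else in the DQN update is unchanged.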
Pages: 28