Dynamic Group Learning Distributed Particle Swarm Optimization for Large-Scale Optimization and Its Application in Cloud Workflow Scheduling

Cited by: 174
Authors
Wang, Zi-Jia [1 ]
Zhan, Zhi-Hui [2 ,3 ,4 ]
Yu, Wei-Jie [5 ]
Lin, Ying [6 ]
Zhang, Jie [7 ]
Gu, Tian-Long [8 ]
Zhang, Jun [9 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou 510006, Peoples R China
[2] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[3] South China Univ Technol, State Key Lab Subtrop Bldg Sci, Guangzhou 510006, Peoples R China
[4] Guangdong Prov Key Lab Computat Intelligence & Cy, Guangzhou 510006, Peoples R China
[5] Sun Yat Sen Univ, Sch Informat Management, Guangzhou 510006, Peoples R China
[6] Sun Yat Sen Univ, Dept Psychol, Guangzhou 510006, Peoples R China
[7] Beijing Univ Chem Technol, Sch Informat Sci & Technol, Beijing 100029, Peoples R China
[8] Guilin Univ Elect Technol, Sch Comp Sci & Engn, Guilin 541004, Peoples R China
[9] Victoria Univ, Melbourne, Vic 8001, Australia
Funding
National Natural Science Foundation of China;
Keywords
Cloud computing; Task analysis; Optimization; Sociology; Statistics; Processor scheduling; Dynamic scheduling; Adaptive renumber strategy (ARS); dynamic group learning distributed particle swarm optimization (DGLDPSO); dynamic group learning strategy; large-scale cloud workflow scheduling; master-slave multigroup distributed; COOPERATIVE COEVOLUTION; ALGORITHM; STRATEGY;
DOI
10.1109/TCYB.2019.2933499
Chinese Library Classification (CLC)
TP [Automation & Computer Technology];
Discipline Code
0812 ;
Abstract
Cloud workflow scheduling is a significant topic in both commercial and industrial applications. However, the growing scale of workflows has made the scheduling problem increasingly challenging. Many existing algorithms handle only small- or medium-scale problems (e.g., fewer than 1000 tasks) and struggle to provide satisfactory solutions to large-scale problems because of the curse of dimensionality. To address this, this article proposes a dynamic group learning distributed particle swarm optimization (DGLDPSO) for large-scale optimization and extends it to large-scale cloud workflow scheduling. DGLDPSO is efficient for large-scale optimization for two reasons. First, the entire population is divided into many groups, which are coevolved using a master-slave multigroup distributed model, forming a distributed PSO (DPSO) that enhances algorithm diversity. Second, a dynamic group learning (DGL) strategy is adopted in DPSO to balance diversity and convergence. When DGLDPSO is applied to large-scale cloud workflow scheduling, an adaptive renumber strategy (ARS) is further developed to relate solutions to the resource characteristics and to make the search behavior purposeful rather than aimless. Experiments are conducted on a large-scale benchmark function set and on large-scale cloud workflow scheduling instances to investigate the performance of DGLDPSO. The comparison results show that DGLDPSO is better than, or at least comparable to, other state-of-the-art large-scale optimization and workflow scheduling algorithms.
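The two mechanisms the abstract describes, a swarm partitioned into coevolving groups and a group-learning update, can be illustrated with a minimal sketch. This is not the paper's algorithm: the random regrouping each iteration, the group-best exemplar, and all parameter values (`w`, `c1`, `c2`, group count) are simplifying assumptions standing in for the master-slave distributed model and the DGL strategy.

```python
import random

def group_learning_pso(f, dim, n_particles=40, n_groups=4, iters=200,
                       lo=-5.0, hi=5.0, seed=0):
    """Minimal single-process sketch of a group-learning distributed PSO.

    Each iteration the swarm is randomly re-partitioned into groups
    (a stand-in for the dynamic group learning strategy); each particle
    learns from its own best and its current group's best, while a
    "master" tracks the global best across all groups.
    """
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]          # personal best positions
    pval = [f(x) for x in X]           # personal best values
    g_idx = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g_idx][:], pval[g_idx]
    w, c1, c2 = 0.7, 1.5, 1.5          # illustrative PSO coefficients

    for _ in range(iters):
        order = list(range(n_particles))
        rng.shuffle(order)             # dynamic regrouping each iteration
        groups = [order[k::n_groups] for k in range(n_groups)]
        for members in groups:
            lb = min(members, key=lambda i: pval[i])   # group exemplar
            for i in members:
                for d in range(dim):
                    V[i][d] = (w * V[i][d]
                               + c1 * rng.random() * (pbest[i][d] - X[i][d])
                               + c2 * rng.random() * (pbest[lb][d] - X[i][d]))
                    X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                fx = f(X[i])
                if fx < pval[i]:
                    pval[i], pbest[i] = fx, X[i][:]
                    if fx < gval:      # master collects the global best
                        gval, gbest = fx, X[i][:]
    return gbest, gval

# Example: minimize the sphere function in 10 dimensions.
best, val = group_learning_pso(lambda x: sum(t * t for t in x), dim=10)
```

Using small groups keeps each exemplar local, which preserves diversity; the shared global best supplies the convergence pressure that the paper's DGL strategy balances adaptively.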
Pages: 2715-2729
Page count: 15
Related Papers
50 records
  • [1] Adaptive Granularity Learning Distributed Particle Swarm Optimization for Large-Scale Optimization
    Wang, Zi-Jia
    Zhan, Zhi-Hui
    Kwong, Sam
    Jin, Hu
    Zhang, Jun
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (03) : 1175 - 1188
  • [2] Cooperative Particle Swarm Optimization With a Bilevel Resource Allocation Mechanism for Large-Scale Dynamic Optimization
    Liu, Xiao-Fang
    Zhang, Jun
    Wang, Jun
    IEEE TRANSACTIONS ON CYBERNETICS, 2023, 53 (02) : 1000 - 1011
  • [3] Superiority combination learning distributed particle swarm optimization for large-scale optimization
    Wang, Zi-Jia
    Yang, Qiang
    Zhang, Yu-Hui
    Chen, Shu-Hong
    Wang, Yuan-Gen
    APPLIED SOFT COMPUTING, 2023, 136
  • [4] Transfer-Based Particle Swarm Optimization for Large-Scale Dynamic Optimization With Changing Variable Interactions
    Liu, Xiao-Fang
    Zhan, Zhi-Hui
    Zhang, Jun
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2024, 28 (06) : 1633 - 1643
  • [5] A Distributed Swarm Optimizer With Adaptive Communication for Large-Scale Optimization
    Yang, Qiang
    Chen, Wei-Neng
    Gu, Tianlong
    Zhang, Huaxiang
    Yuan, Huaqiang
    Kwong, Sam
    Zhang, Jun
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (07) : 3393 - 3408
  • [6] Bi-directional learning particle swarm optimization for large-scale optimization
    Liu, Shuai
    Wang, Zi-Jia
    Wang, Yuan-Gen
    Kwong, Sam
    Zhang, Jun
    APPLIED SOFT COMPUTING, 2023, 149
  • [7] Heterogeneous cognitive learning particle swarm optimization for large-scale optimization problems
    Zhang, En
    Nie, Zihao
    Yang, Qiang
    Wang, Yiqiao
    Liu, Dong
    Jeon, Sang-Woon
    Zhang, Jun
    INFORMATION SCIENCES, 2023, 633 : 321 - 342
  • [8] A sinusoidal social learning swarm optimizer for large-scale optimization
    Liu, Nengxian
    Pan, Jeng-Shyang
    Chu, Shu-Chuan
    Hu, Pei
    KNOWLEDGE-BASED SYSTEMS, 2023, 259
  • [9] Particle swarm optimization based workflow scheduling for medical applications in cloud
    Prathibha, Soma
    Latha, B.
    Suamthi, G.
    BIOMEDICAL RESEARCH-INDIA, 2017, 28
  • [10] Incremental particle swarm optimization for large-scale dynamic optimization with changing variable interactions
    Liu, Xiao-Fang
    Zhan, Zhi-Hui
    Zhang, Jun
    APPLIED SOFT COMPUTING, 2023, 141