Block size estimation for data partitioning in HPC applications using machine learning techniques

Cited by: 0
Authors
Cantini, Riccardo [1]
Marozzo, Fabrizio [1,2]
Orsino, Alessio [1]
Talia, Domenico [1,2]
Trunfio, Paolo [1,2]
Badia, Rosa M. [3]
Ejarque, Jorge [3]
Vazquez-Novoa, Fernando [3]
Affiliations
[1] University of Calabria, Arcavacata di Rende, Italy
[2] Dtok Lab SRL, Arcavacata di Rende, Italy
[3] Barcelona Supercomputing Center, Barcelona, Spain
Funding
European Union Horizon 2020
Keywords
Data partitioning; High performance computing; Data-parallel applications; Machine learning; Big data;
DOI
10.1186/s40537-023-00862-w
Chinese Library Classification
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
The extensive use of HPC infrastructures and frameworks for running data-intensive applications has led to a growing interest in data partitioning techniques and strategies. In fact, application performance can be heavily affected by how data are partitioned, which in turn depends on the selected size for data blocks, i.e., the block size. Therefore, finding an effective partitioning, i.e., a suitable block size, is a key strategy to speed up parallel data-intensive applications and increase scalability. This paper describes a methodology, namely BLEST-ML (BLock size ESTimation through Machine Learning), for block size estimation that relies on supervised machine learning techniques. The proposed methodology was evaluated by designing an implementation tailored to dislib, a distributed computing library highly focused on machine learning algorithms built on top of the PyCOMPSs framework. We assessed the effectiveness of the provided implementation through an extensive experimental evaluation considering different algorithms from dislib, datasets, and infrastructures, including the MareNostrum 4 supercomputer. The results we obtained show the ability of BLEST-ML to efficiently determine a suitable way to split a given dataset, demonstrating its applicability to enabling the efficient execution of data-parallel applications in high-performance environments.
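Note: the abstract describes estimating a suitable block size through supervised machine learning and then using it to partition data for dislib/PyCOMPSs executions. The sketch below is a minimal, illustrative Python example of that general idea, not the paper's BLEST-ML implementation: the feature set (dataset rows and columns, available cores, memory per node), the toy training records, and the choice of a random forest regressor are assumptions made here for illustration; the closing comment assumes dislib's ds.array(x, block_size=...) interface.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training records: each row describes a past execution
# (dataset rows, dataset columns, available cores, memory per node in GB),
# and the target is the block side length that performed best for it.
# These values are illustrative, not taken from the paper.
X_train = np.array([
    [1_000_000,   50,  48,  96],
    [10_000_000, 100,  96, 192],
    [500_000,     20,  16,  64],
    [20_000_000, 200, 192, 384],
])
y_train = np.array([2_000, 5_000, 1_000, 8_000])

# Supervised model mapping dataset/infrastructure descriptors to a block size.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Estimate a block size for a new dataset/infrastructure combination.
new_run = np.array([[5_000_000, 80, 48, 96]])
estimated_side = int(model.predict(new_run)[0])
print(f"Estimated block side length: {estimated_side}")

# The estimate could then drive the partitioning, e.g. with dislib's
# distributed arrays (assuming the ds.array(x, block_size=...) API):
#   import dislib as ds
#   x = ds.array(dataset, block_size=(estimated_side, estimated_side))

In practice such a model would presumably be trained on records of previous executions across datasets and infrastructures; the toy arrays above merely stand in for that training set.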
Pages: 23