Block size estimation for data partitioning in HPC applications using machine learning techniques

Cited: 0
Authors
Cantini, Riccardo [1 ]
Marozzo, Fabrizio [1 ,2 ]
Orsino, Alessio [1 ]
Talia, Domenico [1 ,2 ]
Trunfio, Paolo [1 ,2 ]
Badia, Rosa M. [3 ]
Ejarque, Jorge [3 ]
Vazquez-Novoa, Fernando [3 ]
Affiliations
[1] Univ Calabria, Arcavacata Di Rende, Italy
[2] Dtok Lab SRL, Arcavacata Di Rende, Italy
[3] Barcelona Supercomp Ctr, Barcelona, Spain
Funding
EU Horizon 2020;
Keywords
Data partitioning; High performance computing; Data-parallel applications; Machine learning; Big data;
DOI
10.1186/s40537-023-00862-w
CLC number
TP301 [Theory and Methods];
Discipline code
081202 ;
Abstract
The extensive use of HPC infrastructures and frameworks for running data-intensive applications has led to a growing interest in data partitioning techniques and strategies. In fact, application performance can be heavily affected by how data are partitioned, which in turn depends on the size selected for data blocks, i.e., the block size. Therefore, finding an effective partitioning, i.e., a suitable block size, is key to speeding up parallel data-intensive applications and increasing scalability. This paper describes BLEST-ML (BLock size ESTimation through Machine Learning), a methodology for block size estimation that relies on supervised machine learning techniques. The proposed methodology was evaluated by designing an implementation tailored to dislib, a distributed computing library focused on machine learning algorithms and built on top of the PyCOMPSs framework. We assessed the effectiveness of this implementation through an extensive experimental evaluation involving different dislib algorithms, datasets, and infrastructures, including the MareNostrum 4 supercomputer. The results show the ability of BLEST-ML to efficiently determine a suitable way to split a given dataset, demonstrating its applicability for enabling the efficient execution of data-parallel applications in high-performance environments.
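The abstract describes block size estimation as a supervised learning problem: features of the dataset and of the execution infrastructure are mapped to a suitable block size. The following is a minimal illustrative sketch of that idea, not the paper's actual model; the feature set, the toy training examples, and the 1-nearest-neighbour predictor are all assumptions made for illustration.

```python
import math

# Hypothetical training examples (assumed for illustration):
# features = (log10 of dataset rows, number of cores) -> block size that
# performed well for that configuration.
TRAINING = [
    ((4.0, 16), 1024),
    ((6.0, 48), 8192),
    ((8.0, 192), 65536),
]

def estimate_block_size(n_rows: int, n_cores: int) -> int:
    """Predict a block size via 1-nearest-neighbour over the toy examples.

    A real estimator in the spirit of BLEST-ML would be trained on many
    logged executions and richer features; this only shows the mapping
    from (dataset, infrastructure) features to a block size.
    """
    query = (math.log10(max(n_rows, 1)), n_cores)

    def dist(example):
        feats, _ = example
        # Scale the core count so both features contribute comparably.
        return (feats[0] - query[0]) ** 2 + ((feats[1] - query[1]) / 100.0) ** 2

    _, block = min(TRAINING, key=dist)
    return block
```

For instance, a query with 10,000 rows on 16 cores falls closest to the first toy example and returns its block size.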
Pages: 23