An Optimized Framework for Matrix Factorization on the New Sunway Many-core Platform

Cited: 0
Authors
Ma, Wenjing [1 ,2 ]
Liu, Fangfang [1 ,2 ]
Chen, Daokun [1 ,3 ]
Lu, Qinglin [1 ,3 ]
Hu, Yi [1 ,3 ]
Wang, Hongsen [1 ,3 ]
Yuan, Xinhui [4 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, Beijing 100190, Peoples R China
[2] State Key Lab Comp Sci, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[4] Natl Res Ctr Parallel Comp Engn & Technol, Beijing, Peoples R China
Fund
National Key R&D Program of China;
Keywords
Matrix factorization; LAPACK; manycore processors; SW26010Pro; optimization; parallelization; LINEAR ALGEBRA;
DOI
10.1145/3571856
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Matrix factorization functions are used in many areas and often play an important role in the overall performance of applications. In the LAPACK library, matrix factorization functions are implemented with a blocked factorization algorithm, which shifts most of the workload to the high-performance Level-3 BLAS functions. The non-blocked part, the panel factorization, however, becomes the performance bottleneck, especially for the small- and medium-size matrices that are common in many real applications. On the new Sunway many-core platform, this bottleneck can be alleviated by keeping the panel in the LDM (local data memory) during panel factorization. We therefore propose a new framework for implementing matrix factorization functions on the new Sunway many-core platform that facilitates in-LDM panel factorization. The framework provides a template class with wrapper functions, which integrates inter-CPE communication into the Level-1 and Level-2 BLAS functions through flexible interfaces and can accommodate different partitioning schemes. With the framework, panel factorization code that keeps its data in the LDM space can be written with much higher productivity. We implemented three functions (dgetrf, dgeqrf, and dpotrf) based on the framework and compared our work with a CPE_BLAS version, which links the original LAPACK implementation against an optimized BLAS library running on the CPE mesh. Using the most favorable partitioning, the panel factorization part achieves speedups of up to 26.3, 19.1, and 18.2 for the three matrix factorization functions. For the whole functions, our implementation builds on a carefully tuned recursion framework, and we added specific optimizations to some of the subroutines used in the factorization functions. Overall, compared to the CPE_BLAS version, we obtained average speedups of 9.76 on dgetrf, 10.12 on dgeqrf, and 4.16 on dpotrf.
Based on the current template class, our work can be extended to support more categories of linear algebra functions.
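The blocked factorization scheme the abstract refers to can be illustrated with a minimal sketch. This is not the paper's code (which targets Sunway CPEs in C with LDM-resident panels); it is a generic numpy sketch of a right-looking blocked LU factorization in the style of LAPACK's dgetrf, showing the division of labor the abstract describes: an unblocked, memory-bound panel factorization on a narrow column block, followed by a triangular solve and a large matrix multiply (the Level-3 BLAS part) on the trailing submatrix. The function names `panel_lu` and `blocked_lu` are illustrative, not from the paper.

```python
import numpy as np

def panel_lu(panel):
    """Unblocked LU with partial pivoting on a tall, narrow panel (in place).
    Returns the row-swap permutation. This small, memory-bound kernel is the
    bottleneck that the paper's framework keeps resident in LDM."""
    m, nb = panel.shape
    piv = np.arange(m)
    for j in range(nb):
        p = j + np.argmax(np.abs(panel[j:, j]))     # partial pivoting
        panel[[j, p]] = panel[[p, j]]               # swap rows j and p
        piv[j], piv[p] = piv[p], piv[j]
        panel[j+1:, j] /= panel[j, j]               # compute L column
        panel[j+1:, j+1:] -= np.outer(panel[j+1:, j], panel[j, j+1:])
    return piv

def blocked_lu(A, nb=4):
    """Blocked right-looking LU (dgetrf-style), in place on A.
    Factor a panel, apply its row swaps to the rest of the matrix, solve a
    triangular system for the U block, then update the trailing submatrix
    with one big GEMM, where most of the flops are spent."""
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(0, n, nb):
        e = min(k + nb, n)
        piv = panel_lu(A[k:, k:e])                  # panel factorization
        A[k:, :k] = A[k:, :k][piv]                  # apply swaps to the left
        A[k:, e:] = A[k:, e:][piv]                  # ... and to the right
        perm[k:] = perm[k:][piv]
        L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
        A[k:e, e:] = np.linalg.solve(L11, A[k:e, e:])   # TRSM (Level 3)
        A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]            # GEMM (Level 3)
    return perm
```

As the block size grows, the GEMM update dominates the flop count while the panel work stays latency-bound, which is why accelerating the panel (here, by keeping it in fast local memory) pays off most for small and medium matrices.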
Pages: 24