A Multilevel Subtree Method for Single and Batched Sparse Cholesky Factorization

Cited by: 1
Authors
Tang, Meng [1]
Gadou, Mohamed [1]
Rennich, Steven C. [2]
Davis, Timothy A. [3]
Ranka, Sanjay [1]
Affiliations
[1] Univ Florida, CISE, Gainesville, FL 32611 USA
[2] NVIDIA, Santa Clara, CA USA
[3] Texas A&M Univ, CSE, College Station, TX 77843 USA
Source
PROCEEDINGS OF THE 47TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING | 2018
Funding
U.S. National Science Foundation
Keywords
sparse matrices; sparse direct methods; Cholesky factorization; GPU; CUDA;
DOI
10.1145/3225058.3225090
Chinese Library Classification
TP3 [computing technology, computer technology]
Discipline Code
0812
Abstract
Scientific computing relies heavily on matrix factorization. Cholesky factorization is typically used to solve the linear system Ax = b when A is symmetric and positive definite, and a large number of applications require operating on sparse matrices. A major overhead in factorizing sparse matrices on GPUs is the cost of transferring data from the CPU to the GPU; the computational efficiency of factorizing small dense matrices must also be addressed. In this paper, we develop a multilevel subtree method for Cholesky factorization of large sparse matrices on single and multiple GPUs. This approach addresses two important limitations of previous methods. First, by applying the subtree method to both the lower and the higher levels of the elimination tree, we increase concurrency and computational efficiency; previous approaches used the subtree method only at the lower levels. Second, we overlap the computation of one subtree with the data transfer of another, thereby reducing the overhead of CPU-to-GPU transfers. Additionally, we propose batched parallelism for applications that require the simultaneous factorization of multiple matrices: the tree structure for a collection of matrices can be derived by merging the individual elimination trees. Our experimental results show that each of the three techniques results in a significant performance improvement, and that their combination yields a speedup of up to 2.43x on a variety of sparse matrices.
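The transfer/compute overlap described in the abstract maps naturally onto CUDA streams. The following is a minimal sketch of that idea, not the authors' implementation: factor_subtree is a hypothetical placeholder kernel standing in for the dense supernodal work on one subtree, and each subtree's data is assumed to be packed into a contiguous, uniformly sized buffer.

```cuda
// Hedged sketch (not the paper's code): overlap the host-to-device copy of
// one subtree with the factorization of another, using two CUDA streams
// and double-buffered device storage. factor_subtree is a hypothetical
// placeholder for the real supernodal dense kernels.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void factor_subtree(double* data, size_t n) {
    // Stand-in for the dense numeric work on one GPU-resident subtree.
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0;
}

int main() {
    const int    num_subtrees  = 8;
    const size_t subtree_elems = 1 << 20;  // assumed uniform subtree size
    const size_t bytes         = subtree_elems * sizeof(double);

    // Pinned host memory is required for cudaMemcpyAsync to overlap.
    double* h_subtrees;
    cudaMallocHost(&h_subtrees, num_subtrees * bytes);
    for (size_t i = 0; i < num_subtrees * subtree_elems; ++i)
        h_subtrees[i] = 1.0;

    // Two device buffers and two streams: while stream s%2 factorizes
    // subtree s, the copy for subtree s+1 runs in the other stream.
    double*      d_buf[2];
    cudaStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        cudaMalloc(&d_buf[b], bytes);
        cudaStreamCreate(&stream[b]);
    }

    for (int s = 0; s < num_subtrees; ++s) {
        int b = s % 2;
        // In-stream order guarantees the copy finishes before the kernel;
        // across streams, this copy overlaps the other buffer's kernel.
        cudaMemcpyAsync(d_buf[b], h_subtrees + s * subtree_elems, bytes,
                        cudaMemcpyHostToDevice, stream[b]);
        factor_subtree<<<(unsigned int)((subtree_elems + 255) / 256), 256,
                         0, stream[b]>>>(d_buf[b], subtree_elems);
    }
    cudaDeviceSynchronize();
    printf("factored %d subtrees with overlapped transfers\n", num_subtrees);

    for (int b = 0; b < 2; ++b) {
        cudaStreamDestroy(stream[b]);
        cudaFree(d_buf[b]);
    }
    cudaFreeHost(h_subtrees);
    return 0;
}
```

Under these assumptions, all but the first transfer is hidden behind computation on devices with a dedicated copy engine; the paper's method additionally chooses subtree granularity at multiple levels of the elimination tree, which this sketch does not model.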
Pages: 10