Sparsity-Aware Tensor Decomposition

Cited by: 3
Authors
Kurt, Sureyya Emre [1 ]
Raje, Saurabh [1 ]
Sukumaran-Rajam, Aravind [2 ]
Sadayappan, P. [1 ]
Affiliations
[1] Univ Utah, Sch Comp, Salt Lake City, UT 84112 USA
[2] Washington State Univ, Dept EECS, Pullman, WA 99164 USA
Source
2022 IEEE 36TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS 2022) | 2022
Funding
U.S. National Science Foundation;
Keywords
CPD; MTTKRP; sparse tensor factorization;
DOI
10.1109/IPDPS53621.2022.00097
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Sparse tensor decomposition, such as Canonical Polyadic Decomposition (CPD), is a key operation for data analytics and machine learning. Its computation is dominated by a set of MTTKRP (Matricized Tensor Times Khatri-Rao Product) operations. The MTTKRP operations required for sparse CPD share common sub-computations, and many approaches exist to factorize and reuse these common sub-expressions. Prior work on sparse CPD has focused on minimizing the number of high-level operators. In this paper, we consider a design space that covers whether partial MTTKRP results should be saved and which mode permutations are used, and we model the total volume of data movement to/from memory. We also propose a fine-grained load-balancing method that supports higher levels of parallelization.
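For readers unfamiliar with the kernel the abstract centers on, a minimal illustrative sketch of a mode-0 MTTKRP over a COO-format sparse 3-way tensor follows. This NumPy code is not from the paper; the function name and COO layout are assumptions chosen for clarity, and the paper's contribution concerns how such computations are factored, ordered, and load-balanced, not this naive loop.

```python
import numpy as np

def mttkrp_mode0(coords, vals, B, C, I):
    """Naive mode-0 MTTKRP for a 3-way sparse tensor in COO form.

    coords: (nnz, 3) integer array of (i, j, k) indices of nonzeros
    vals:   (nnz,) nonzero values x_ijk
    B, C:   dense factor matrices of shapes (J, R) and (K, R)
    Returns M of shape (I, R) where M[i] = sum over (i,j,k) nonzeros
    of x_ijk * (B[j] * C[k]) (elementwise row product).
    """
    R = B.shape[1]
    M = np.zeros((I, R))
    for (i, j, k), x in zip(coords, vals):
        # Scale the Hadamard product of the two factor rows by the nonzero
        M[i] += x * B[j] * C[k]
    return M

# Tiny 2x2x2 example with two nonzeros
coords = np.array([[0, 1, 0], [1, 0, 1]])
vals = np.array([2.0, 3.0])
B = np.ones((2, 2))
C = np.ones((2, 2))
M = mttkrp_mode0(coords, vals, B, C, I=2)
```

A full CPD solve performs one such MTTKRP per tensor mode in every iteration, which is why reusing the partial row products across modes (the design space the abstract describes) can reduce memory traffic substantially.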
Pages: 952-962
Page count: 11
References
21 in total
[1]  
[Anonymous], 2016, RR89 INR RES CTR GRE
[2]   Communication Lower Bounds for Matricized Tensor Times Khatri-Rao Product [J].
Ballard, Grey ;
Rouse, Kathryn ;
Knight, Nicholas .
2018 32ND IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS), 2018, :557-567
[3]  
Baskaran M, 2012, IEEE HIGH PERF EXTR
[4]   On Optimizing Distributed Non-negative Tucker Decomposition [J].
Chakaravarthy, Venkatesan T. ;
Pandian, Shivmaran S. ;
Raje, Saurabh ;
Sabharwal, Yogish .
INTERNATIONAL CONFERENCE ON SUPERCOMPUTING (ICS 2019), 2019, :238-249
[5]   Automatic Generation of Efficient Sparse Tensor Format Conversion Routines [J].
Chou, Stephen ;
Kjolstad, Fredrik ;
Amarasinghe, Saman .
PROCEEDINGS OF THE 41ST ACM SIGPLAN CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION (PLDI '20), 2020, :823-838
[6]   PLANC: Parallel Low-rank Approximation with Nonnegativity Constraints [J].
Eswar, Srinivas ;
Hayashi, Koby ;
Ballard, Grey ;
Kannan, Ramakrishnan ;
Matheson, Michael A. ;
Park, Haesun .
ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE, 2021, 47 (03)
[7]   ALTO: Adaptive Linearized Storage of Sparse Tensors [J].
Helal, Ahmed E. ;
Laukemann, Jan ;
Checconi, Fabio ;
Tithi, Jesmin Jahan ;
Ranadive, Teresa ;
Petrini, Fabrizio ;
Choi, Jeewhan .
PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SUPERCOMPUTING, ICS 2021, 2021, :404-416
[8]  
Jeon I, 2015, PROC INT CONF DATA, P1047, DOI 10.1109/ICDE.2015.7113355
[9]   Scalable Sparse Tensor Decompositions in Distributed Memory Systems [J].
Kaya, Oguz ;
Ucar, Bora .
PROCEEDINGS OF SC15: THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2015,
[10]   Tensor Decompositions and Applications [J].
Kolda, Tamara G. ;
Bader, Brett W. .
SIAM REVIEW, 2009, 51 (03) :455-500