RANDOMIZED ALGORITHMS FOR ROUNDING IN THE TENSOR-TRAIN FORMAT

Cited: 14
Authors
Al Daas, Hussam [1 ]
Ballard, Grey [2 ]
Cazeaux, Paul [3 ]
Hallman, Eric [4 ]
Miedlar, Agnieszka [3 ]
Pasha, Mirjeta [5 ]
Reid, Tim W. [4 ]
Saibaba, Arvind K. [4 ]
Affiliations
[1] Rutherford Appleton Lab, Computat Math Grp, Didcot OX11 0QX, England
[2] Wake Forest Univ, Dept Comp Sci, Winston Salem, NC 27106 USA
[3] Virginia Tech, Dept Math, Blacksburg, VA 24061 USA
[4] North Carolina State Univ, Dept Math, Raleigh, NC 27607 USA
[5] Tufts Univ, Dept Math, Medford, MA 02155 USA
Keywords
high-dimensional problems; randomized algorithms; tensor decompositions; tensor-train format; LINEAR-SYSTEMS; RANK APPROXIMATIONS; COMPUTATION; TUCKER; SVD;
DOI
10.1137/21M1451191
Chinese Library Classification
O29 [Applied Mathematics];
Subject Classification Code
070104 ;
Abstract
The tensor-train (TT) format is a highly compact low-rank representation for high-dimensional tensors. TT is particularly useful when representing approximations to the solutions of certain types of parametrized partial differential equations. For many of these problems, computing the solution explicitly would require an infeasible amount of memory and computational time. While the TT format makes these problems tractable, iterative techniques for solving the PDEs must be adapted to perform arithmetic while maintaining the implicit structure. The fundamental operation used to maintain feasible memory and computational time is called rounding, which truncates the internal ranks of a tensor already in TT format. We propose several randomized algorithms for this task that are generalizations of randomized low-rank matrix approximation algorithms and provide significant reduction in computation compared to deterministic TT-rounding algorithms. Randomization is particularly effective in the case of rounding a sum of TT-tensors (where we observe a 20× speedup), which is the bottleneck computation in the adaptation of GMRES to vectors in TT format. We present the randomized algorithms and compare their empirical accuracy and computational time with deterministic alternatives.
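The randomized TT-rounding algorithms the abstract describes generalize randomized low-rank matrix approximation. As an illustrative sketch only (not the authors' algorithm), the following shows the matrix-level building block such methods extend: the randomized range finder of Halko, Martinsson, and Tropp, where a Gaussian sketch captures the range of a matrix before truncation. Function and parameter names here are hypothetical.

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, rng=None):
    """Randomized range finder: return Q, B with A ≈ Q @ B.

    Q has rank + oversample orthonormal columns spanning an
    approximation of the range of A (illustrative sketch).
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Gaussian test matrix samples the column space of A.
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega                 # sketch: one pass over A
    Q, _ = np.linalg.qr(Y)        # orthonormal basis for the sketch
    B = Q.T @ A                   # project A onto that basis
    return Q, B

# Usage: compress a matrix of exact rank 15.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 300))
Q, B = randomized_lowrank(A, rank=15, rng=0)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
print(err)
```

Because the sketch size (rank + oversample) exceeds the true rank, the relative error is at the level of machine precision; TT rounding applies this idea core-by-core to truncate internal ranks without forming the full tensor.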
Pages: A74 / A95
Page count: 22