Approximation and sampling of multivariate probability distributions in the tensor train decomposition

Authors
Sergey Dolgov
Karim Anaya-Izquierdo
Colin Fox
Robert Scheichl
Affiliations
[1] University of Bath
[2] University of Otago
[3] University of Heidelberg
Source
Statistics and Computing | 2020, Vol. 30
Keywords
Multivariate distributions; Surrogate models; Tensor decomposition; MCMC; Importance weights
DOI: not available
Abstract
General multivariate distributions are notoriously expensive to sample from, particularly the high-dimensional posterior distributions in PDE-constrained inverse problems. This paper develops a sampler for arbitrary continuous multivariate distributions that is based on low-rank surrogates in the tensor train format, a methodology that has been exploited for many years for scalable, high-dimensional density function approximation in quantum physics and chemistry. We build upon recent developments of the cross approximation algorithms in linear algebra to construct a tensor train approximation to the target probability density function using a small number of function evaluations. For sufficiently smooth distributions, the storage required for accurate tensor train approximations is moderate, scaling linearly with dimension. In turn, the structure of the tensor train surrogate allows sampling by an efficient conditional distribution method since marginal distributions are computable with linear complexity in dimension. Expected values of non-smooth quantities of interest, with respect to the surrogate distribution, can be estimated using transformed independent uniformly random seeds that provide Monte Carlo quadrature or transformed points from a quasi-Monte Carlo lattice to give more efficient quasi-Monte Carlo quadrature. Unbiased estimates may be calculated by correcting the transformed random seeds using a Metropolis–Hastings accept/reject step, while the quasi-Monte Carlo quadrature may be corrected either by a control-variate strategy or by importance weighting. We show that the error in the tensor train approximation propagates linearly into the Metropolis–Hastings rejection rate and the integrated autocorrelation time of the resulting Markov chain; thus, the integrated autocorrelation time may be made arbitrarily close to 1, implying that, asymptotically in sample size, the cost per effectively independent sample is one target density evaluation plus the cheap tensor train surrogate proposal that has linear cost with dimension. These methods are demonstrated in three computed examples: fitting failure time of shock absorbers; a PDE-constrained inverse diffusion problem; and sampling from the Rosenbrock distribution. The delayed rejection adaptive Metropolis (DRAM) algorithm is used as a benchmark. In all computed examples, the importance-weight-corrected quasi-Monte Carlo quadrature performs best and is more efficient than DRAM by orders of magnitude across a wide range of approximation accuracies and sample sizes. Indeed, all the methods developed here significantly outperform DRAM in all computed examples.
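The conditional distribution method described in the abstract is concrete enough to illustrate in code. Below is a minimal NumPy sketch, not the authors' implementation: it assumes the density has already been approximated on a tensor grid by TT cores cores[k] of shape (r_{k-1}, n_k, r_k), clips any negative values the TT approximation may produce, and transforms one uniform seed per dimension through the inverse CDF of each one-dimensional conditional. The function and variable names are illustrative.

```python
import numpy as np

def tt_conditional_sample(cores, seeds):
    """Transform uniform seeds in [0,1)^d into grid indices distributed
    (approximately) according to the TT density, via the conditional
    distribution method: one inverse-CDF step per dimension.

    cores : list of d arrays, cores[k] of shape (r_{k-1}, n_k, r_k) with
            boundary ranks 1, so that the density on the grid is
            p(i_1,...,i_d) ~ cores[0][:, i_1, :] @ ... @ cores[d-1][:, i_d, :].
    seeds : array of shape (d,), uniforms from MC or a QMC lattice.
    """
    d = len(cores)
    # Right interface vectors t[k]: sums over all trailing indices of the
    # remaining cores, so every marginal costs O(d) matrix-vector products.
    t = [None] * (d + 1)
    t[d] = np.ones(1)
    for k in range(d - 1, -1, -1):
        t[k] = cores[k].sum(axis=1) @ t[k + 1]

    idx = np.zeros(d, dtype=int)
    phi = np.ones(1)                      # left interface (row) vector
    for k in range(d):
        # Unnormalised 1-D conditional p(i_k | i_1,...,i_{k-1}) on the grid.
        cond = np.einsum('a,anb,b->n', phi, cores[k], t[k + 1])
        cond = np.clip(cond, 0.0, None)   # TT approximation may dip below 0
        cdf = np.cumsum(cond)
        cdf /= cdf[-1]
        idx[k] = np.searchsorted(cdf, seeds[k])   # inverse-CDF transform
        phi = phi @ cores[k][:, idx[k], :]
    return idx
```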
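To obtain the quasi-Monte Carlo variant mentioned in the abstract, the seeds can be taken from a randomly shifted rank-1 lattice instead of a pseudorandom stream. The generating vector below is an ad hoc Korobov-style choice purely for illustration; good vectors are normally computed by a component-by-component construction.

```python
import numpy as np

# Drive the inverse-CDF sampler with a randomly shifted rank-1 lattice rule.
# 'cores' and tt_conditional_sample come from the sketch above; the generating
# vector z is hypothetical, chosen only to keep the example short.
N = 1024
d = len(cores)
z = np.array([pow(17, k, N) for k in range(d)])     # ad hoc Korobov vector
shift = np.random.default_rng(0).uniform(size=d)    # random shift
lattice = (np.outer(np.arange(N), z) / N + shift) % 1.0
samples = np.array([tt_conditional_sample(cores, u) for u in lattice])
```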
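The two correction strategies at the end of the abstract are independent of the TT machinery and fit in a few lines each. The sketch below assumes pointwise access to the log of the (possibly unnormalised) exact target, log_pi, and of the surrogate density, log_q; both names are hypothetical, and the loop is a textbook independence Metropolis–Hastings step rather than the paper's exact code.

```python
import numpy as np

def mh_independence_correct(draws, log_pi, log_q, rng=None):
    """Independence Metropolis-Hastings: turn draws from the surrogate q into
    an unbiased chain targeting the exact density pi."""
    rng = np.random.default_rng() if rng is None else rng
    chain = np.empty_like(draws)
    x = draws[0]
    lw_x = log_pi(x) - log_q(x)                  # log importance weight at x
    for n, y in enumerate(draws):
        lw_y = log_pi(y) - log_q(y)
        # Accept with probability min(1, pi(y) q(x) / (pi(x) q(y))); for a
        # good surrogate the ratio is close to 1 and rejections are rare.
        if np.log(rng.uniform()) < lw_y - lw_x:
            x, lw_x = y, lw_y
        chain[n] = x
    return chain

def importance_quadrature(draws, f, log_pi, log_q):
    """Self-normalised importance-weighted estimate of E_pi[f(X)], suitable
    for draws transformed from a QMC lattice as in the sketch above."""
    logw = np.array([log_pi(x) - log_q(x) for x in draws])
    w = np.exp(logw - logw.max())                # stabilise before normalising
    vals = np.array([f(x) for x in draws])
    return float((w * vals).sum() / w.sum())
```

With an accurate surrogate, log_pi - log_q is nearly constant across draws, which is why the abstract can state that the rejection rate, and hence the excess of the integrated autocorrelation time over 1, scales linearly with the tensor train approximation error.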
Pages: 603–625
Page count: 22