Learning Feynman Diagrams with Tensor Trains

Cited: 29
Authors
Fernandez, Yuriel Nunez [1]
Jeannin, Matthieu [1]
Dumitrescu, Philipp T. [2]
Kloss, Thomas [1,3]
Kaye, Jason [2,4]
Parcollet, Olivier [2,5]
Waintal, Xavier [1]
Affiliations
[1] Univ Grenoble Alpes, CEA, Grenoble INP, IRIG, Pheliqs, F-38000 Grenoble, France
[2] Flatiron Inst, Ctr Computat Quantum Phys, 162 5th Ave, New York, NY 10010 USA
[3] Univ Grenoble Alpes, Inst Neel, CNRS, F-38000 Grenoble, France
[4] Flatiron Inst, Ctr Computat Math, 162 5th Ave, New York, NY 10010 USA
[5] Univ Paris Saclay, Inst Phys Theor, CNRS, CEA, F-91191 Gif Sur Yvette, France
Keywords
QUANTUM; APPROXIMATION; MATRIX; QUASIOPTIMALITY
DOI
10.1103/PhysRevX.12.041018
Chinese Library Classification
O4 [Physics]
Discipline Code
0702
Abstract
We use tensor network techniques to obtain high-order perturbative diagrammatic expansions for the quantum many-body problem at very high precision. The approach is based on a tensor train parsimonious representation of the sum of all Feynman diagrams, obtained in a controlled and accurate way with the tensor cross interpolation algorithm. It yields the full time evolution of physical quantities in the presence of any arbitrary time-dependent interaction. Our benchmarks on the Anderson quantum impurity problem, within the real-time nonequilibrium Schwinger-Keldysh formalism, demonstrate that this technique supersedes diagrammatic quantum Monte Carlo by orders of magnitude in precision and speed, with convergence rates 1/N² or faster, where N is the number of function evaluations. The method also works in parameter regimes characterized by strongly oscillatory integrals in high dimension, which suffer from a catastrophic sign problem in quantum Monte Carlo calculations. Finally, we also present two exploratory studies showing that the technique generalizes to more complex situations: a double quantum dot and a single impurity embedded in a two-dimensional lattice.
Pages: 30
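
The abstract names two technical ingredients: a tensor train (TT) factorization, which collapses a sum over exponentially many index configurations into a chain of small matrix products, and tensor cross interpolation (TCI), whose elementary step is a matrix cross (skeleton) approximation built from a few sampled rows and columns. The Python/NumPy sketch below illustrates both mechanisms under stated assumptions; it is not the authors' implementation, and the core shapes, the bond rank of 5, and the cross_interpolation helper are illustrative choices, not anything from the paper:

    import numpy as np

    def tt_full_sum(cores):
        # Sum a TT-represented tensor over all of its indices.
        # Each core A has shape (r_left, d, r_right); summing the
        # physical index of each core and chaining the bond
        # contractions reduces a d**n-term sum to n small products.
        v = np.ones(1)                  # left boundary, bond rank 1
        for A in cores:
            v = v @ A.sum(axis=1)       # sum physical index, contract bond
        return v.item()                 # right boundary, bond rank 1

    def cross_interpolation(A, rows, cols):
        # Matrix cross (skeleton) approximation A ~ C P^{-1} R built
        # only from sampled rows and columns: the elementary
        # rank-revealing step that TCI sweeps over the tensor train.
        C = A[:, cols]                  # column panel
        R = A[rows, :]                  # row panel
        P = A[np.ix_(rows, cols)]       # pivot block
        return C @ np.linalg.solve(P, R)

    rng = np.random.default_rng(0)
    ranks = [1] + [5] * 9 + [1]         # bond ranks of a 10-site TT
    cores = [0.1 * rng.standard_normal((ranks[k], 4, ranks[k + 1]))
             for k in range(10)]        # physical dimension d = 4
    print(tt_full_sum(cores))           # 4**10 terms via 10 tiny products

    A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
    approx = cross_interpolation(A, rows=[0, 1, 2], cols=[0, 1, 2])
    print(np.linalg.norm(A - approx))   # ~0 for an exact rank-3 matrix

For an exactly rank-3 matrix with a nonsingular pivot block, the skeleton formula reproduces A to machine precision; TCI exploits this mechanism sweep by sweep, choosing pivots adaptively, to build the TT cores from a controlled number of function evaluations.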