Learning Feynman Diagrams with Tensor Trains

Cited by: 29
|
Authors
Fernandez, Yuriel Nunez [1 ]
Jeannin, Matthieu [1 ]
Dumitrescu, Philipp T. [2 ]
Kloss, Thomas [1 ,3 ]
Kaye, Jason [2 ,4 ]
Parcollet, Olivier [2 ,5 ]
Waintal, Xavier [1 ]
Affiliations
[1] Univ Grenoble Alpes, CEA, Grenoble INP, IRIG,Pheliqs, F-38000 Grenoble, France
[2] Flatiron Inst, Ctr Computat Quantum Phys, 162 5th Ave, New York, NY 10010 USA
[3] Univ Grenoble Alpes, Inst Neel, CNRS, F-38000 Grenoble, France
[4] Flatiron Inst, Ctr Computat Math, 162 5th Ave, New York, NY 10010 USA
[5] Univ Paris Saclay, Inst Phys Theor, CNRS, CEA, F-91191 Gif Sur Yvette, France
Keywords
QUANTUM; APPROXIMATION; MATRIX; QUASIOPTIMALITY;
DOI
10.1103/PhysRevX.12.041018
CLC classification number
O4 [Physics];
Discipline classification code
0702 ;
Abstract
We use tensor network techniques to obtain high-order perturbative diagrammatic expansions for the quantum many-body problem at very high precision. The approach is based on a tensor train parsimonious representation of the sum of all Feynman diagrams, obtained in a controlled and accurate way with the tensor cross interpolation algorithm. It yields the full time evolution of physical quantities in the presence of any arbitrary time-dependent interaction. Our benchmarks on the Anderson quantum impurity problem, within the real-time nonequilibrium Schwinger-Keldysh formalism, demonstrate that this technique supersedes diagrammatic quantum Monte Carlo by orders of magnitude in precision and speed, with convergence rates 1/N^2 or faster, where N is the number of function evaluations. The method also works in parameter regimes characterized by strongly oscillatory integrals in high dimension, which suffer from a catastrophic sign problem in quantum Monte Carlo calculations. Finally, we also present two exploratory studies showing that the technique generalizes to more complex situations: a double quantum dot and a single impurity embedded in a two-dimensional lattice.
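The core idea in the abstract, representing a high-dimensional integrand (the sum of Feynman diagrams at a given order) as a low-rank tensor train so that sums and integrals over all variables become cheap, can be illustrated with a minimal sketch. This is not the authors' code: the paper builds the train adaptively with tensor cross interpolation, which samples the function at a few pivots instead of ever forming the full tensor; for a small illustrative grid we can afford the full tensor and use a plain TT-SVD decomposition instead. The grid size, tolerance, and separable test function below are all illustrative choices.

```python
import numpy as np

# Discretize a smooth 6-variable function on an 8-point grid per axis.
d, n = 6, 8
grid = np.linspace(0.0, 1.0, n)
axes = np.meshgrid(*[grid] * d, indexing="ij")
A = np.exp(-sum(x**2 for x in axes))  # separable, hence low-rank, integrand

# TT-SVD: peel off one index at a time, truncating small singular values.
cores, rank, tol = [], 1, 1e-10
M = A.reshape(n, -1)
for k in range(d - 1):
    U, s, Vt = np.linalg.svd(M.reshape(rank * n, -1), full_matrices=False)
    keep = max(1, int(np.sum(s > tol * s[0])))  # truncation rank
    cores.append(U[:, :keep].reshape(rank, n, keep))
    M = s[:keep, None] * Vt[:keep]
    rank = keep
cores.append(M.reshape(rank, n, 1))

# Sum the function over all n**d grid points by contracting the cores
# one at a time: cost linear in d, instead of exponential.
v = np.ones(1)
for G in cores:
    v = np.einsum("i,ijk->k", v, G)  # sums over the physical index j too
tt_sum = v[0]
assert np.isclose(tt_sum, A.sum())
```

Because the test function is separable, every core ends up with bond dimension 1; the point of the paper is that the full Keldysh integrand, while not separable, is still well approximated by trains of modest rank, and that cross interpolation finds such a train from O(rank² · d · n) function evaluations.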
Pages: 30
Related papers
50 records in total
  • [21] Tensor network compressed sensing with unsupervised machine learning
    Ran, Shi-Ju
    Sun, Zheng-Zhi
    Fei, Shao-Ming
    Su, Gang
    Lewenstein, Maciej
    PHYSICAL REVIEW RESEARCH, 2020, 2 (03):
  • [22] Scalable and Sound Low-Rank Tensor Learning
    Cheng, Hao
    Yu, Yaoliang
    Zhang, Xinhua
    Xing, Eric
    Schuurmans, Dale
    ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 51, 2016, 51 : 1114 - 1123
  • [23] Learning Polynomial Transformations via Generalized Tensor Decompositions
    Chen, Sitan
    Li, Jerry
    Li, Yuanzhi
    Zhang, Anru R.
    PROCEEDINGS OF THE 55TH ANNUAL ACM SYMPOSIUM ON THEORY OF COMPUTING, STOC 2023, 2023, : 1671 - 1684
  • [24] Sample Efficient Learning of Factored Embeddings of Tensor Fields
    Heo, Taemin
    Bajaj, Chandra
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [25] Digital spiking neuron and its learning for approximation of various spike-trains
    Torikai, Hiroyuki
    Funew, Atsuo
    Saito, Toshimichi
    NEURAL NETWORKS, 2008, 21 (2-3) : 140 - 149
  • [26] LEARNING EFFICIENT TENSOR REPRESENTATIONS WITH RING-STRUCTURED NETWORKS
    Zhao, Qibin
    Sugiyama, Masashi
    Yuan, Longhao
    Cichocki, Andrzej
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 8608 - 8612
  • [27] Learning from Multiway Data: Simple and Efficient Tensor Regression
    Yu, Rose
    Liu, Yan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [28] Learning Diagonal Gaussian Mixture Models and Incomplete Tensor Decompositions
    Guo, Bingni
    Nie, Jiawang
    Yang, Zi
    VIETNAM JOURNAL OF MATHEMATICS, 2022, 50 (02) : 421 - 446
  • [29] TENSOR-BASED ALGORITHMS FOR LEARNING MULTIDIMENSIONAL SEPARABLE DICTIONARIES
    Roemer, Florian
    Del Galdo, Giovanni
    Haardt, Martin
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,
  • [30] Learning Good State and Action Representations via Tensor Decomposition
    Ni, Chengzhuo
    Zhang, Anru R.
    Duan, Yaqi
    Wang, Mengdi
    2021 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2021, : 1682 - 1687