Generalization in quantum machine learning from few training data

Cited by: 173
Authors
Caro, Matthias C. [1 ,2 ]
Huang, Hsin-Yuan [3 ,4 ]
Cerezo, M. [5 ,6 ]
Sharma, Kunal [7 ]
Sornborger, Andrew [5 ,8 ]
Cincio, Lukasz [9 ]
Coles, Patrick J. [9 ]
Affiliations
[1] Tech Univ Munich, Dept Math, Garching, Germany
[2] Munich Ctr Quantum Sci & Technol MCQST, Munich, Germany
[3] CALTECH, Inst Quantum Informat & Matter, Pasadena, CA 91125 USA
[4] CALTECH, Dept Comp & Math Sci, Pasadena, CA 91125 USA
[5] Los Alamos Natl Lab, Informat Sci, Los Alamos, NM 87545 USA
[6] Los Alamos Natl Lab, Ctr Nonlinear Studies, Los Alamos, NM 87545 USA
[7] Univ Maryland, Joint Ctr Quantum Informat & Comp Sci, College Pk, MD 20742 USA
[8] Quantum Sci Ctr, Oak Ridge, TN 37931 USA
[9] Los Alamos Natl Lab, Theoret Div, Los Alamos, NM 87545 USA
DOI
10.1038/s41467-022-32550-3
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In this work, we provide a comprehensive study of generalization performance in QML after training on a limited number N of training data points. We show that the generalization error of a quantum machine learning model with T trainable gates scales at worst as √(T/N). When only K ≪ T gates have undergone substantial change in the optimization process, we prove that the generalization error improves to √(K/N). Our results imply that compiling unitaries into a polynomial number of native gates, a crucial application for the quantum computing industry that typically uses exponential-size training data, can be sped up significantly. We also show that classifying quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set. Other potential applications include learning quantum error correcting codes or quantum dynamical simulation. Our work injects new hope into the field of QML, as good generalization is guaranteed from few training data.
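For concreteness, the scaling claims in the abstract can be written out as a short LaTeX sketch. The notation here is illustrative only (the symbols ℓ and α are not taken from the record): gen(α) denotes the gap between the expected loss and the empirical loss over the N training points for trained parameters α, with constants and logarithmic factors suppressed, as in the abstract.

% Generalization-error scaling as stated in the abstract
% (illustrative notation; constants and log factors suppressed)
\[
  \mathrm{gen}(\alpha)
  \;=\; \mathbb{E}\big[\ell(\alpha)\big]
  \;-\; \frac{1}{N}\sum_{i=1}^{N} \ell_i(\alpha)
  \;\in\; \mathcal{O}\!\left(\sqrt{\frac{T}{N}}\right),
\]
and, when only \(K \ll T\) of the \(T\) trainable gates change appreciably during optimization, this tightens to
\[
  \mathrm{gen}(\alpha) \;\in\; \mathcal{O}\!\left(\sqrt{\frac{K}{N}}\right).
\]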
Pages: 11