Stochastic gradient descent for hybrid quantum-classical optimization

Cited: 163
Authors
Sweke, Ryan [1 ]
Wilde, Frederik [1 ]
Meyer, Johannes Jakob [1 ]
Schuld, Maria [2 ,3 ]
Faehrmann, Paul K. [1 ]
Meynard-Piganeau, Barthelemy [4 ]
Eisert, Jens [1 ,5 ,6 ]
Affiliations
[1] Free Univ Berlin, Dahlem Ctr Complex Quantum Syst, D-14195 Berlin, Germany
[2] Xanadu, 777 Bay St, Toronto, ON, Canada
[3] Univ KwaZulu Natal, Quantum Res Grp, ZA-4000 Durban, South Africa
[4] Ecole Polytech, Dept Phys, Palaiseau, France
[5] Helmholtz Ctr Berlin, D-14109 Berlin, Germany
[6] Free Univ Berlin, Dept Math & Comp Sci, D-14195 Berlin, Germany
Funding
EU Horizon 2020;
Keywords
SUPREMACY;
DOI
10.22331/q-2020-08-31-314
CLC Number
O4 [Physics];
Subject Classification Code
0702;
Abstract
Within the context of hybrid quantum-classical optimization, gradient-descent-based optimizers typically require the evaluation of expectation values with respect to the outcomes of parameterized quantum circuits. In this work, we explore the consequences of the prior observation that estimating these quantities on quantum hardware results in a form of stochastic gradient descent optimization. We formalize this notion, which allows us to show that in many relevant cases, including VQE, QAOA and certain quantum classifiers, estimating expectation values with k measurement outcomes yields optimization algorithms whose convergence properties can be rigorously well understood, for any value of k. In fact, even single measurement outcomes suffice for the estimation of expectation values. Moreover, in many settings the required gradients can be expressed as linear combinations of expectation values - originating, e.g., from a sum over the local terms of a Hamiltonian, a parameter-shift rule, or a sum over data-set instances - and we show that in these cases k-shot expectation value estimation can be combined with sampling over the terms of the linear combination to obtain "doubly stochastic" gradient descent optimizers. For all algorithms we prove convergence guarantees, providing a framework for the derivation of rigorous optimization results in the context of near-term quantum devices. Additionally, we numerically explore these methods on benchmark VQE, QAOA and quantum-enhanced machine learning tasks, and show that treating the stochastic settings as hyper-parameters allows for state-of-the-art results with significantly fewer circuit executions and measurements.
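The central mechanism the abstract describes - that k-shot (even single-shot) estimates of expectation values, combined with a parameter-shift rule for the gradient, still drive a convergent stochastic gradient descent - can be sketched on a toy one-qubit problem. The following is an illustrative NumPy simulation only, not the paper's implementation: the circuit RY(theta)|0>, the observable Z, and all function names are chosen for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def expval_z_kshot(theta, k):
    """Estimate <Z> of the state RY(theta)|0> from k simulated measurement shots.
    For this state P(+1) = cos^2(theta/2), so the exact value is cos(theta)."""
    p_plus = np.cos(theta / 2) ** 2
    shots = rng.choice([1.0, -1.0], size=k, p=[p_plus, 1.0 - p_plus])
    return shots.mean()

def grad_kshot(theta, k):
    """Parameter-shift rule: d<Z>/dtheta = (E(theta + pi/2) - E(theta - pi/2)) / 2,
    where each expectation value is itself estimated from only k shots.
    The estimate is unbiased: its mean is the exact gradient -sin(theta)."""
    return 0.5 * (expval_z_kshot(theta + np.pi / 2, k)
                  - expval_z_kshot(theta - np.pi / 2, k))

# Stochastic gradient descent minimizing <Z>; the true minimum is theta = pi.
theta, eta, k = 0.3, 0.1, 1   # even k = 1, i.e. single-shot estimates, suffice
for _ in range(2000):
    theta -= eta * grad_kshot(theta, k)

print(theta)  # drifts toward pi, where <Z> = cos(theta) = -1
```

With k = 1 each gradient estimate takes only the values -1, 0, or +1, yet its expectation equals the exact gradient, which is what makes the convergence analysis go through. The "doubly stochastic" variant described in the abstract would additionally sample which term of a linear combination (e.g. which local term of a Hamiltonian) to estimate at each step, again without biasing the gradient.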
Pages: 29