Optimal Quantum Sample Complexity of Learning Algorithms

Cited by: 6
Authors
Arunachalam, Srinivasan [1 ,2 ,3 ]
de Wolf, Ronald [1 ,2 ,3 ]
Affiliations
[1] QuSoft Res Ctr Quantum Software, Amsterdam, Netherlands
[2] CWI Ctr Wiskunde & Informat, Amsterdam, Netherlands
[3] Univ Amsterdam, Amsterdam, Netherlands
Source
32ND COMPUTATIONAL COMPLEXITY CONFERENCE (CCC 2017) | 2017 / Vol. 79
Keywords
Quantum computing; PAC learning; agnostic learning; VC dimension; LOWER BOUNDS; EXAMPLES; NUMBER; DNF;
DOI
10.4230/LIPIcs.CCC.2017.25
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Code
0812 ;
Abstract
In learning theory, the VC dimension of a concept class C is the most common way to measure its "richness". A fundamental result says that the number of examples needed to learn an unknown target concept c ∈ C under an unknown distribution D is tightly determined by the VC dimension d of the concept class C. Specifically, in the PAC model, Θ(d/ε + log(1/δ)/ε) examples are necessary and sufficient for a learner to output, with probability 1−δ, a hypothesis h that is ε-close to the target concept c (measured under D). In the related agnostic model, where the samples need not come from a c ∈ C, we know that Θ(d/ε² + log(1/δ)/ε²) examples are necessary and sufficient to output a hypothesis h ∈ C whose error is at most ε worse than the error of the best concept in C. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson [18], who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atici and Servedio [10], improved by Zhang [55], showed that in the PAC setting (where the learner has to succeed for every distribution), quantum examples cannot be much more powerful: the required number of quantum examples is Ω(d^(1−η)/ε + d + log(1/δ)/ε) for arbitrarily small constant η > 0. Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two proof approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds, and yields the same bounds for quantum sample complexity up to a log(d/ε) factor.
We then give a second approach that avoids this log-factor loss, based on analyzing the behavior of the "Pretty Good Measurement" on the quantum state-identification problems that correspond to learning. This shows that classical and quantum sample complexity are equal up to constant factors for every concept class C.
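For readability, the three sample-complexity bounds quoted in the abstract can be written out in display form (this is a restatement of the abstract's formulas, not additional results from the paper; d is the VC dimension of C, ε the accuracy parameter, δ the failure probability, and η > 0 an arbitrarily small constant):

```latex
% PAC model: necessary and sufficient number of examples
% (classical, and by the paper's main result also quantum)
\Theta\!\left(\frac{d}{\epsilon} + \frac{\log(1/\delta)}{\epsilon}\right)

% Agnostic model: necessary and sufficient number of examples
\Theta\!\left(\frac{d}{\epsilon^{2}} + \frac{\log(1/\delta)}{\epsilon^{2}}\right)

% Earlier quantum PAC lower bound of Atici--Servedio, improved by Zhang
\Omega\!\left(\frac{d^{1-\eta}}{\epsilon} + d + \frac{\log(1/\delta)}{\epsilon}\right)
```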
Pages: 31
Related Papers
55 records in total
[1] Aaronson, Scott. The learnability of quantum states. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2007, 463(2088): 3089-3114.
[2] Aaronson, Scott. Read the fine print. Nature Physics, 2015, 11(04): 291-293.
[3] Aïmeur, E. 2006, Lecture Notes in Artificial Intelligence, Vol. 4013, p. 431.
[4] Aïmeur, Esma; Brassard, Gilles; Gambs, Sebastien. Quantum speed-up for unsupervised learning. Machine Learning, 2013, 90(02): 261-287.
[5] Ambainis, A. 2014, Quantum Information & Computation, Vol. 14, p. 439.
[6] Angluin, D. 1988, Machine Learning, Vol. 2, p. 343. DOI 10.1007/BF00116829.
[7] [Anonymous], 1974, Theory of Pattern Recognition.
[8] [Anonymous], 2016, Journal of Machine Learning Research.
[9] Anthony, M. 2009, Neural Network Learning.
[10] Apolloni, B; Gentile, C. Sample size lower bounds in PAC learning by Algorithmic Complexity Theory. Theoretical Computer Science, 1998, 209(1-2): 141-162.