Constructing hard functions from learning algorithms

Cited by: 17
Authors
Klivans, Adam [1 ]
Kothari, Pravesh [1 ]
Oliveira, Igor C. [2 ]
Affiliations
[1] UT Austin, Austin, TX 78712 USA
[2] Columbia Univ, New York, NY 10027 USA
Source
2013 IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY (CCC) | 2013
Funding
US National Science Foundation
Keywords
algorithms; circuit lower bounds; computational complexity; computational learning theory; statistical queries
DOI
10.1109/CCC.2013.18
CLC number
TP301 [Theory and methods]
Discipline code
081202
Abstract
Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class C subset of P/poly of Boolean circuits is exactly learnable with membership and equivalence queries in polynomial time, then EXP^NP not subset of C (the class EXP^NP was subsequently improved to EXP by Hitchcock and Harkins). In this paper, we improve on these results and show:
- If C is exactly learnable with membership and equivalence queries in polynomial time, then DTIME(n^omega(1)) not subset of C. We obtain even stronger consequences if the class C is learnable in the mistake-bounded model, in which case we prove an average-case hardness result against C.
- If C is learnable in polynomial time in the PAC model, then PSPACE not subset of C, unless PSPACE subset of BPP. Removing this extra assumption from the statement of the theorem would provide an unconditional proof that PSPACE not subset of BPP.
- If C is efficiently learnable in the Correlational Statistical Query (CSQ) model, we show that there exists an explicit function f that is average-case hard for circuits in C. To the best of our knowledge, this result provides stronger average-case hardness guarantees than those obtained by SQ-dimension arguments (Blum et al. 1993). We also obtain a non-constructive extension of this result to the stronger Statistical Query (SQ) model.
Similar results hold when the learning algorithm runs in subexponential time. Our proofs regarding exact and mistake-bounded learning are simple and self-contained, yield explicit hard functions, and show how to use mistake-bounded learners to "diagonalize" over families of polynomial-size circuits. Our consequences for PAC learning lead to new proofs of Karp-Lipton-style collapse results, and the lower bounds from SQ learning make use of recent work relating combinatorial discrepancy to the existence of hard-on-average functions.
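The abstract's central idea for the mistake-bounded case is that a learner with mistake bound m can itself be used to "diagonalize": if we always label each input with the opposite of the learner's prediction, then after m+1 forced mistakes the resulting function cannot belong to the learned class. The sketch below illustrates only this diagonalization idea in a toy finite setting with a halving learner; the class names, interfaces, and the concrete concept class are illustrative assumptions, not constructions from the paper (which works with polynomial-size circuit families).

```python
# Illustrative sketch (not the paper's construction): diagonalizing against
# a mistake-bounded learner to build a function outside the concept class.

class HalvingLearner:
    """Toy mistake-bounded learner over an explicit finite concept class:
    predicts by majority vote of the concepts still consistent with the
    labels seen so far, giving mistake bound log2 |C| (the halving algorithm)."""
    def __init__(self, concepts):
        self.version_space = list(concepts)  # each concept: dict input -> bit

    def predict(self, x):
        votes = sum(c[x] for c in self.version_space)
        return 1 if 2 * votes >= len(self.version_space) else 0

    def update(self, x, label):
        # Keep only the concepts that agree with the revealed label.
        self.version_space = [c for c in self.version_space if c[x] == label]

def diagonalize(learner, inputs):
    """Define f(x) to be the opposite of the learner's prediction on every
    input. Each input forces a mistake, so if the learner has mistake bound m
    on class C, then after m+1 inputs f cannot lie in C: f is hard for C."""
    f = {}
    for x in inputs:
        f[x] = 1 - learner.predict(x)  # disagree with the learner
        learner.update(x, f[x])        # let the learner see its "mistake"
    return f

# Usage: C = 4 of the 8 Boolean functions on 3 inputs (mistake bound 2),
# so forcing mistakes on 3 inputs drives f outside of C.
xs = [0, 1, 2]
concepts = [{x: (i >> x) & 1 for x in xs} for i in range(4)]
f = diagonalize(HalvingLearner(concepts), xs)
assert all(any(f[x] != c[x] for x in xs) for c in concepts)  # f is not in C
```

The key point mirrored from the paper: the hard function is explicit because it is computed by running the (efficient) learner and negating its predictions, so an efficient learner for C yields a function, computable with little extra overhead, that no member of C computes.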
Pages: 86-97
Page count: 12
References
28 in total
[1] [Anonymous]. Machine Learning.
[2] [Anonymous]. Symposium on Foundations of Computer Science (FOCS), 1978.
[3] Babai L. Computational Complexity, 1993, 3:307. DOI: 10.1007/BF01275486.
[4] Blum A. STOC, 1994:253.
[5] Blumer A, Ehrenfeucht A, Haussler D, Warmuth MK. Occam's razor. Information Processing Letters, 1987, 24(6):377-380.
[6] Bshouty NH, Feldman V. On using extended statistical queries to avoid membership queries. Journal of Machine Learning Research, 2002, 2(3):359-395.
[7] Chattopadhyay E. Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques (APPROX/RANDOM 2012), 2012:495. DOI: 10.1007/978-3-642-32512-0_42.
[8] Chazelle B. NUMERICAL MATH SCI C, 2001.
[9] Feldman V. ACM Symposium on Theory of Computing (STOC), 2008:619.
[10] Fortnow L, Klivans AR. Efficient learning algorithms yield circuit lower bounds. Journal of Computer and System Sciences, 2009, 75(1):27-36.