Learnability and automatizability

Cited by: 19
Authors
Alekhnovich, M [1 ]
Braverman, M [1 ]
Feldman, V [1 ]
Klivans, AR [1 ]
Pitassi, T [1 ]
Institution
[1] IAS, Princeton, NJ, USA
Source
45TH ANNUAL IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE, PROCEEDINGS | 2004
DOI
10.1109/FOCS.2004.36
CLC number
TP301 [Theory and methods];
Subject classification code
081202
Abstract
We consider the complexity of properly learning concept classes, i.e., when the learner must output a hypothesis of the same form as the unknown concept. We present the following new upper and lower bounds on well-known concept classes:
- We show that, unless NP = RP, there is no polynomial-time PAC learning algorithm for DNF formulae where the hypothesis is an OR-of-thresholds. As special cases, neither DNF nor OR-of-thresholds is properly learnable unless NP = RP. Previous hardness results required strong restrictions on the size of the output DNF formula.
- We prove that it is NP-hard to learn the intersection of ℓ ≥ 2 halfspaces by the intersection of k halfspaces for any constant k ≥ 0. Previous work held only for the case k = ℓ.
- Assuming that NP ⊄ DTIME(2^{n^ε}) for a certain constant ε < 1, we show that it is not possible to learn size-s decision trees by size-s^k decision trees for any k ≥ 0. Previous hardness results for learning decision trees held for k ≤ 2.
- We present the first non-trivial upper bounds on properly learning DNF formulae and decision trees. In particular, we show how to learn size-s DNF by DNF in time 2^{Õ(√n log s)}, and how to learn size-s decision trees by decision trees in time n^{O(log s)}.
The hardness results for DNF formulae and intersections of halfspaces are obtained via specialized graph products that amplify the hardness of approximating the chromatic number, together with recent work on the hardness of approximate hypergraph coloring. The hardness results for decision trees, as well as the new upper bounds, are obtained by developing a connection between automatizability in proof complexity and learnability, which may have other applications.
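For readability, a LaTeX restatement of the two upper bounds quoted above (n is the number of variables, s the size of the target concept, and Õ suppresses polylogarithmic factors; the names T_DNF and T_DT are illustrative labels, not the paper's notation):

\[
T_{\mathrm{DNF}}(n, s) = 2^{\tilde{O}(\sqrt{n}\,\log s)},
\qquad
T_{\mathrm{DT}}(n, s) = n^{O(\log s)}.
\]

Note that for polynomial-size decision trees, s = poly(n) gives log s = O(log n), so the second bound is quasipolynomial: n^{O(log n)}.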
Pages: 621-630
Page count: 10