THE CHOW PARAMETERS PROBLEM

Cited by: 36
Authors
O'Donnell, Ryan [1 ]
Servedio, Rocco A. [2 ]
Affiliations
[1] Carnegie Mellon Univ, Dept Comp Sci, Pittsburgh, PA 15213 USA
[2] Columbia Univ, Dept Comp Sci, New York, NY 10027 USA
Funding
U.S. National Science Foundation;
Keywords
Chow parameters; threshold functions; approximation; learning theory; RESTRICTED-FOCUS; THRESHOLD; LEARNABILITY; HARDNESS; VECTOR;
DOI
10.1137/090756466
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Classification Code
081202;
Abstract
In [Proceedings of the Second Symposium on Switching Circuit Theory and Logical Design (FOCS), 1961, pp. 34-38], Chow proved that every Boolean threshold function is uniquely determined by its degree-0 and degree-1 Fourier coefficients. These numbers became known as the Chow parameters. Providing an algorithmic version of Chow's theorem, i.e., efficiently constructing a representation of a threshold function given its Chow parameters, has remained open ever since. This problem has received significant study in the fields of circuit complexity, game theory and the design of voting systems, and learning theory. In this paper we effectively solve the problem, giving a randomized polynomial-time approximation scheme with the following behavior: given the Chow parameters of a Boolean threshold function f over n bits and any constant ε > 0, the algorithm runs in time O(n^2 log^2 n) and with high probability outputs a representation of a threshold function f' that is ε-close to f. Along the way we prove several new results of independent interest about Boolean threshold functions. In addition to various structural results, these include Õ(n^2)-time learning algorithms for threshold functions under the uniform distribution in the following models: (i) the restricted focus of attention model, answering an open question of Birkendorf et al.; (ii) an agnostic-type model, which contrasts with recent results of Guruswami and Raghavendra, who show NP-hardness for the problem under general distributions; (iii) the PAC model, with constant ε. Our Õ(n^2)-time algorithm substantially improves on the previous best known running time and nearly matches the Ω(n^2) bits of training data that any successful learning algorithm must use.
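The abstract refers throughout to the Chow parameters of a Boolean threshold function, i.e., its degree-0 and degree-1 Fourier coefficients E[f(x)] and E[f(x)·x_i] over the uniform distribution on {-1,1}^n. The minimal Python sketch below only illustrates this definition by brute-force enumeration on a small, illustrative weight vector (both the enumeration and the example weights are assumptions for illustration); it is not the paper's reconstruction algorithm, which goes in the opposite direction, from the parameters back to a threshold function.

```python
from itertools import product

def chow_parameters(weights, theta):
    """Return (f_hat_0, [f_hat_1, ..., f_hat_n]) for f(x) = sign(w·x - theta)."""
    n = len(weights)

    def f(x):
        # Threshold function over {-1, 1}^n; ties broken toward +1 here.
        return 1 if sum(w * xi for w, xi in zip(weights, x)) - theta >= 0 else -1

    cube = list(product((-1, 1), repeat=n))
    total = len(cube)
    # Degree-0 Fourier coefficient: E[f(x)] under the uniform distribution.
    f_hat_0 = sum(f(x) for x in cube) / total
    # Degree-1 Fourier coefficients: E[f(x) * x_i] for each coordinate i.
    f_hat_1 = [sum(f(x) * x[i] for x in cube) / total for i in range(n)]
    return f_hat_0, f_hat_1

if __name__ == "__main__":
    # Illustrative (hypothetical) example: weights (2, 1, 1), threshold 0.
    print(chow_parameters([2, 1, 1], 0))
```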
Pages: 165-199
Number of pages: 35
Related references
59 in total
  • [1] [Anonymous]. P COCOON ANN INT C C
  • [2] [Anonymous]. Computational Complexity, 1992. DOI: 10.1007/BF01200426
  • [3] Aziz H. INMIC 2007: Proceedings of the 11th IEEE International Multitopic Conference, 2007: 211
  • [4] Banzhaf J.F. Rutgers Law Review, 1965, 19: 317
  • [5] Ben-David S, Dichterman E. Learning with restricted focus of attention. Journal of Computer and System Sciences, 1998, 56(3): 277-298
  • [6] Benedek GM, Itai A. Learnability with respect to fixed distributions. Theoretical Computer Science, 1991, 86(2): 377-389
  • [7] Birkendorf A, Dichterman E, Jackson J, Klasner N, Simon HU. On restricted-focus-of-attention learnability of Boolean functions. Machine Learning, 1998, 30(1): 89-123
  • [8] Blum A, Kalai A, Wasserman H. Noise-tolerant learning, the parity problem, and the statistical query model. Journal of the ACM, 2003, 50(4): 506-519
  • [9] Bruck J. Harmonic analysis of polynomial threshold functions. SIAM Journal on Discrete Mathematics, 1990, 3(2): 168-177
  • [10] Carreras F. On the design of voting games. Mathematical Methods of Operations Research, 2004, 59(3): 503-515