On learning monotone Boolean functions under the uniform distribution

Cited by: 6
Authors
Amano, K [1 ]
Maruoka, A [1 ]
Affiliations
[1] Tohoku Univ, Grad Sch Informat Sci, Sendai, Miyagi 9808579, Japan
Keywords
PAC learning; monotone Boolean functions; harmonic analysis; majority function;
DOI
10.1016/j.tcs.2005.10.012
Chinese Library Classification
TP301 [Theory, Methods]
Subject classification code
081202
Abstract
In this paper, we prove two general theorems on monotone Boolean functions that are useful for constructing learning algorithms for monotone Boolean functions under the uniform distribution. A monotone Boolean function is called fair if it takes the value 1 on exactly half of its inputs. The first result proved in this paper is that the single-variable function f(x) = x_i has the minimum correlation with the majority function among all fair monotone functions. This proves a conjecture of Blum et al. (1998, Proc. 39th FOCS, pp. 408-415) and improves the performance guarantee of the best known learning algorithm for monotone Boolean functions under the uniform distribution, which they proposed. Our second result concerns the relationship between the influences and the average sensitivity of a monotone Boolean function. The influence of variable x_i on f is defined as the probability that f(x) differs from f(x ⊕ e_i), where x is chosen uniformly from {0,1}^n and x ⊕ e_i denotes x with its ith bit flipped. The average sensitivity of f is defined as the sum of the influences over all variables x_i. We prove a somewhat unintuitive result: if the influence of every variable on a monotone Boolean function is small, i.e., O(1/n^c) for some constant c > 0, then the average sensitivity of the function must be large, i.e., Ω(log n). We also discuss how to apply this result to the construction of a new learning algorithm for monotone Boolean functions. (c) 2005 Elsevier B.V. All rights reserved.
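The quantities defined in the abstract (influence, average sensitivity, fairness) can be illustrated with a brute-force sketch. This is an illustration of the definitions only, not of the paper's algorithm; the function names and the choice of the majority function on n = 3 variables are my own.

```python
from itertools import product

def majority(x):
    # Majority function on an odd number of bits:
    # returns 1 iff more than half of the bits are 1.
    return 1 if sum(x) > len(x) // 2 else 0

def influence(f, n, i):
    # Influence of variable i on f: the probability, over x drawn
    # uniformly from {0,1}^n, that f(x) != f(x XOR e_i),
    # i.e., that flipping the ith bit changes the value of f.
    flips = 0
    for x in product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1  # x with its ith bit flipped
        if f(x) != f(tuple(y)):
            flips += 1
    return flips / 2 ** n

def average_sensitivity(f, n):
    # Average sensitivity of f: the sum of the influences
    # of all n variables.
    return sum(influence(f, n, i) for i in range(n))

def is_fair(f, n):
    # f is fair if it takes the value 1 on exactly half of its inputs.
    return sum(f(x) for x in product((0, 1), repeat=n)) == 2 ** (n - 1)
```

For majority on n = 3, flipping bit i changes the output exactly when the other two bits disagree, so each variable has influence 1/2, the average sensitivity is 3/2, and the function is fair (it is 1 on 4 of the 8 inputs).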
Pages: 3 - 12
Page count: 10