NEURAL NETS WITH SUPERLINEAR VC-DIMENSION

Cited by: 29
Author(s)
MAASS, W
Affiliation(s)
Keywords
DOI
10.1162/neco.1994.6.5.877
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
It has been known for quite a while that the Vapnik-Chervonenkis dimension (VC-dimension) of a feedforward neural net with linear threshold gates is at most O(w log w), where w is the total number of weights in the neural net. We show in this paper that this bound is in fact asymptotically optimal. More precisely, we exhibit for any depth d ≥ 3 a large class of feedforward neural nets of depth d with w weights that have VC-dimension Ω(w log w). This lower bound holds even if the inputs are restricted to Boolean values. The proof of this result relies on a new method that allows us to encode more "program bits" in the weights of a neural net than previously thought possible.
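The VC-dimension discussed in the abstract is the size of the largest input set on which the net can realize every possible binary labeling ("shatter" the set). As an illustration of that definition only (this sketch is not from the paper; the function shatters and the finite search grid are assumptions made for the demo), here is a brute-force check for a single linear threshold gate:

    import itertools
    import numpy as np

    def shatters(points, weight_grid, thresh_grid):
        # Return True if gates of the form  x -> [w . x >= t], with w and t
        # drawn from the given finite grids, realize every one of the
        # 2^m possible +/- labelings of the m given points.
        points = np.asarray(points, dtype=float)
        realized = set()
        for w in itertools.product(weight_grid, repeat=points.shape[1]):
            for t in thresh_grid:
                realized.add(tuple(points @ np.array(w) >= t))
        return len(realized) == 2 ** len(points)

    grid = np.linspace(-3, 3, 13)   # weights/thresholds in {-3, -2.5, ..., 3}
    print(shatters([(0, 0), (1, 0), (0, 1)], grid, grid))          # True
    print(shatters([(0, 0), (1, 0), (0, 1), (1, 1)], grid, grid))  # False: XOR labeling is unrealizable

A single gate on two inputs thus shatters three points in general position but never four, in line with a VC-dimension that grows only linearly with the number of parameters. The paper's result shows that once depth d ≥ 3 is allowed, nets can exceed this linear behavior by a factor of log w.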
Pages: 877-884
Number of pages: 8
References
13 items in total
  • [1] Bartlett, P.L. (1993). Proceedings of the 6th Annual ACM Conference on Computational Learning Theory, p. 144.
  • [2] Baum, E.B., Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1(1), 151-160.
  • [3] Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M.K. (1989). Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4), 929-965.
  • [4] Cover, T.M. (1964). Stanford Technical Report SEL61071.
  • [5] Ehrenfeucht, A., Haussler, D., Kearns, M., Valiant, L. (1989). A general lower bound on the number of examples needed for learning. Information and Computation, 82(3), 247-261.
  • [6] Lupanov, O.B. (1972). Soviet Physics - Doklady, 17, p. 91.
  • [7] Maass, W. (1992). Report IIG-349, TU Graz.
  • [8] Maass, W. (1993). Proceedings of the 25th Annual ACM Symposium on the Theory of Computing, p. 335.
  • [9] Maass, W. (1991). Proceedings of the 32nd Annual IEEE Symposium on Foundations of Computer Science, p. 767.
  • [10] Neciporuk, E.I. (1964). Automation Express, 7, p. 35.