Approximating functions with multi-features by deep convolutional neural networks

Cited by: 36
Authors
Mao, Tong [1 ]
Shi, Zhongjie [2 ]
Zhou, Ding-Xuan [3 ]
Affiliations
[1] Claremont Grad Univ, Inst Math Sci, 710 N Coll Ave, Claremont, CA 91711 USA
[2] Katholieke Univ Leuven, Dept Elect Engn, ESAT STADIUS, Kasteelpk Arenberg 10, B-3001 Leuven, Belgium
[3] Univ Sydney, Sch Math & Stat, Sydney, NSW 2006, Australia
Funding
U.S. National Science Foundation;
Keywords
Deep learning; convolutional neural networks; rates of approximation; curse of dimensionality; feature extraction; error bounds;
DOI
10.1142/S0219530522400085
Chinese Library Classification
O29 [Applied Mathematics];
Subject Classification Code
070104;
Abstract
Deep convolutional neural networks (DCNNs) have achieved great empirical success in many fields such as natural language processing, computer vision, and pattern recognition. However, a theoretical understanding of the flexibility and adaptivity of DCNNs in various learning tasks, and of their power at feature extraction, is still lacking. We propose a generic DCNN structure consisting of two groups of convolutional layers associated with two downsampling operators, plus a fully connected layer, determined by only three structural parameters. Our generic DCNNs can extract a variety of features, including not only polynomial features but also general smooth features. We also show that our DCNNs circumvent the curse of dimensionality for target functions of compositional form with (symmetric) polynomial features, spatially sparse smooth features, and interaction features. These results demonstrate the expressive power of our DCNN structure, while model selection is easier than for other deep neural networks, since only three hyperparameters control the architecture.
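The abstract's description of the generic architecture (two groups of convolutional layers, each followed by a downsampling operator, then one fully connected layer) can be sketched in code. The following is a minimal illustrative sketch, not the authors' exact construction: it assumes PyTorch, realizes the two downsampling operators as max pooling, and uses the hypothetical names J1, J2, and s for the three structural parameters (the depths of the two convolutional groups and the shared filter length).

```python
# Minimal sketch of the generic two-group DCNN shape described in the
# abstract. The parameterization (J1, J2, s) and the choice of max pooling
# as the downsampling operator are illustrative assumptions, not the
# paper's exact construction.

import torch
import torch.nn as nn


def generic_dcnn(J1, J2, s, out_width=1):
    """Two convolutional groups + two downsamplings + one FC layer.

    J1, J2 -- depths of the two convolutional groups (assumed hyperparameters)
    s      -- filter length shared by all convolutional layers
    """
    layers = []
    # First group: J1 zero-padded 1-D convolutions with ReLU activations;
    # padding s-1 makes each layer expansive (output length grows by s-1).
    for _ in range(J1):
        layers += [nn.Conv1d(1, 1, kernel_size=s, padding=s - 1), nn.ReLU()]
    # First downsampling operator, realized here as max pooling.
    layers.append(nn.MaxPool1d(kernel_size=2))
    # Second group of convolutional layers.
    for _ in range(J2):
        layers += [nn.Conv1d(1, 1, kernel_size=s, padding=s - 1), nn.ReLU()]
    # Second downsampling operator.
    layers.append(nn.MaxPool1d(kernel_size=2))
    layers.append(nn.Flatten())
    # Single fully connected output layer; input width is inferred lazily.
    layers.append(nn.LazyLinear(out_width))
    return nn.Sequential(*layers)


if __name__ == "__main__":
    net = generic_dcnn(J1=3, J2=2, s=4)
    x = torch.randn(8, 1, 16)   # batch of 8 inputs in R^16
    print(net(x).shape)         # torch.Size([8, 1])
```

Only the three integers J1, J2, and s shape the whole network here, which mirrors the abstract's point that model selection reduces to tuning three architectural hyperparameters.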
Pages: 93-125
Number of pages: 33