Random vector functional link networks for function approximation on manifolds

Cited by: 0
Authors
Needell, Deanna [1 ]
Nelson, Aaron A. [2 ]
Saab, Rayan [3 ,4 ]
Salanevich, Palina [5 ]
Schavemaker, Olov [5 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Math, Los Angeles, CA 90095 USA
[2] US Air Force Acad, Dept Math Sci, Colorado Springs, CO USA
[3] Univ Calif San Diego, Dept Math, San Diego, CA USA
[4] Univ Calif San Diego, Halicioglu Data Sci Inst, San Diego, CA USA
[5] Univ Utrecht, Math Inst, Utrecht, Netherlands
Funding
National Science Foundation (USA);
Keywords
machine learning; feed-forward neural networks; function approximation; smooth manifold; random vector functional link; ADAPTIVE FUNCTION APPROXIMATION; NEURAL-NETWORKS; FEEDFORWARD NETWORKS; STOCHASTIC CHOICE; NET; BOUNDS;
DOI
10.3389/fams.2024.1284706
Chinese Library Classification
O1 [Mathematics];
Discipline codes
0701; 070101;
Abstract
The learning speed of feed-forward neural networks is notoriously slow and has presented a bottleneck in deep learning applications for several decades. For instance, gradient-based learning algorithms, which are used extensively to train neural networks, tend to work slowly when all of the network parameters must be iteratively tuned. To counter this, both researchers and practitioners have tried introducing randomness to reduce the learning requirement. Based on the original construction of Igelnik and Pao, single-layer neural networks with random input-to-hidden layer weights and biases have seen success in practice, but the necessary theoretical justification is lacking. In this study, we begin to fill this theoretical gap. We provide a (corrected) rigorous proof that the Igelnik and Pao construction is a universal approximator for continuous functions on compact domains, with approximation error squared decaying asymptotically like O(1/n) in the number n of network nodes. We then extend this result to the non-asymptotic setting, using a concentration inequality for Monte-Carlo integral approximations to prove that one can achieve any desired approximation error with high probability provided n is sufficiently large. We further adapt this randomized neural network architecture to approximate functions on smooth, compact submanifolds of Euclidean space, providing theoretical guarantees in both asymptotic and non-asymptotic forms. Finally, we illustrate our results on manifolds with numerical experiments.
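The architecture the abstract describes, random and fixed input-to-hidden weights with only the output layer trained, can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's construction: the activation (tanh), the uniform sampling range, and the function names are assumptions chosen for the example, and the output layer is fit by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, y, n_nodes=300, scale=6.0):
    """Fit an RVFL-style network: random hidden layer, least-squares output.

    W and b are drawn once at random and never updated; only the
    output weights beta are learned, which reduces training to a
    single linear least-squares solve.
    """
    d = X.shape[1]
    W = rng.uniform(-scale, scale, size=(d, n_nodes))  # random, fixed
    b = rng.uniform(-scale, scale, size=n_nodes)       # random, fixed
    H = np.tanh(X @ W + b)                             # hidden-layer features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # only layer trained
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy target: approximate f(x) = sin(2*pi*x) on the compact domain [0, 1].
X = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X).ravel()
W, b, beta = rvfl_fit(X, y)
err = np.max(np.abs(rvfl_predict(X, W, b, beta) - y))
```

Because the hidden layer is frozen, training cost is dominated by one `lstsq` call rather than iterative gradient descent, which is the speed advantage the abstract refers to; the paper's results quantify how large n must be for the approximation error to be small.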
Pages: 20