Neural networks in Fréchet spaces

Cited by: 0
Authors
Fred Espen Benth
Nils Detering
Luca Galimberti
Institutions
[1] University of Oslo, Department of Mathematics
[2] University of California at Santa Barbara, Department of Statistics and Applied Probability
[3] Norwegian University of Science and Technology, Department of Mathematical Sciences
Keywords
Neural networks; Universal approximation; Fréchet space; Activation function; MSC: 68T07; 46T99
DOI
Not available
Abstract
We propose a neural network architecture in infinite dimensional spaces for which we can show the universal approximation property. Specifically, we derive approximation results for continuous functions from a Fréchet space $\mathfrak{X}$ into a Banach space $\mathfrak{Y}$. These results generalise the well-known universal approximation theorem for continuous functions from $\mathbb{R}^{n}$ to $\mathbb{R}$, where the approximation is carried out with (multilayer) neural networks (Cybenko 1989, Math. Cont. Signals Syst. 2, 303–314; Hornik et al. 1989, Neural Netw. 2, 359–366; Funahashi 1989, Neural Netw. 2, 183–192; Leshno et al. 1993, Neural Netw. 6, 861–867). Our infinite dimensional networks are constructed from activation functions that are nonlinear operators, together with affine transforms; several examples of such activation functions are given. We show furthermore that our neural networks on infinite dimensional spaces can be projected down to finite dimensional subspaces with any desired accuracy, yielding approximating networks that are easy to implement and allow for fast computation and fitting. The resulting neural network architecture is therefore applicable to prediction tasks based on functional data.
Pages: 75–103
Page count: 28
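
The abstract describes layers built from affine maps composed with nonlinear operator activations, and the projection of such networks onto finite dimensional subspaces for implementation. The following Python sketch illustrates that general idea under stated assumptions; it is not the authors' construction, and the cosine basis, the coefficient-wise tanh activation, and all names below are illustrative choices:

```python
# A minimal sketch (assumptions, not the paper's method): one "operator layer"
# f |-> sigma(A f + b), with A a linear map, b a bias, and sigma a pointwise
# nonlinearity, acting on functions projected onto a finite-dimensional subspace.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # truncation level of the basis (assumed)
t = np.linspace(0.0, 1.0, 200, endpoint=False)
dt = t[1] - t[0]

# Truncated orthonormal cosine basis phi_0, ..., phi_{N-1} on [0, 1].
phi = np.stack([np.ones_like(t)] +
               [np.sqrt(2.0) * np.cos(np.pi * k * t) for k in range(1, N)])

def project(f_vals):
    """Coefficients <f, phi_k> of the projection onto span{phi_k} (Riemann sum)."""
    return (phi * f_vals).sum(axis=1) * dt

def reconstruct(coeffs):
    """Function values of sum_k c_k phi_k on the grid."""
    return coeffs @ phi

def layer(coeffs, A, b):
    """sigma(A c + b) on coefficient vectors: A stands in for a (projected)
    continuous linear operator, b for a bias element, and tanh for one
    convenient choice of nonlinear activation, applied coefficient-wise."""
    return np.tanh(A @ coeffs + b)

# Forward pass of a two-layer network on an example input function.
A1, b1 = rng.normal(size=(N, N)) / np.sqrt(N), rng.normal(size=N)
A2, b2 = rng.normal(size=(N, N)) / np.sqrt(N), rng.normal(size=N)
f = np.sin(2.0 * np.pi * t) + 0.5 * t
out = reconstruct(layer(layer(project(f), A1, b1), A2, b2))
print(out[:5])                          # the output is again a function on [0, 1]
```

Working with basis coefficients rather than grid values is what makes the network finite dimensional and cheap to fit, mirroring the projection step discussed in the abstract.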