ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY

Cited by: 179
Authors
Smale, Steve [1 ]
Zhou, Ding-Xuan [1 ]
Affiliations
[1] City University of Hong Kong, Department of Mathematics, Kowloon, Hong Kong, People's Republic of China
Keywords
Learning theory; approximation error; reproducing kernel Hilbert space; kernel machine learning; interpolation space; logarithmic rate of convergence
DOI
10.1142/S0219530503000089
Chinese Library Classification (CLC)
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
Let $B$ be a Banach space and $(H, \|\cdot\|_H)$ a dense, embedded subspace. For $a \in B$, its distance to the ball of $H$ with radius $R$, denoted $I(a, R)$, tends to zero as $R$ tends to infinity. We are interested in the rate of this convergence. This approximation problem arose from the study of learning theory, where $B$ is the $L^2$ space and $H$ is a reproducing kernel Hilbert space. The class of elements with $I(a, R) = O(R^{-r})$ for some $r > 0$ is an interpolation space of the couple $(B, H)$. The rate of convergence can often be realized by linear operators; in particular, this is the case when $H$ is the range of a compact, symmetric, strictly positive definite linear operator on a separable Hilbert space $B$. For the kernel approximation studied in learning theory, the rate depends on the regularity of the kernel function. This yields error estimates for approximation by reproducing kernel Hilbert spaces. When the kernel is smooth, the convergence is slow; for analytic kernels a logarithmic convergence rate is presented in this paper. The purpose of our results is to provide theoretical estimates, including the constants, for the approximation error required in learning theory.
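As a brief illustration of the quantity estimated in the abstract, the approximation error can be written as follows (a minimal sketch using the distance-to-a-ball formulation stated in the abstract; no notation beyond that of the abstract is assumed):

% Approximation error of a in B by the ball of radius R in the dense subspace H
\[
  I(a, R) \;=\; \inf_{\substack{b \in H \\ \|b\|_H \le R}} \|a - b\|_{B},
  \qquad
  I(a, R) \longrightarrow 0 \quad \text{as } R \to \infty .
\]
% The paper studies the elements with polynomial decay of this error,
% I(a, R) = O(R^{-r}) for some r > 0, which form an interpolation space of the couple (B, H).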
Pages: 17-41
Number of pages: 25