Learning and approximation by Gaussians on Riemannian manifolds

Cited by: 0
Authors
Gui-Bo Ye
Ding-Xuan Zhou
Affiliations
[1] Fudan University, School of Mathematical Sciences
[2] City University of Hong Kong, Department of Mathematics
Source
Advances in Computational Mathematics | 2008 / Volume 29
Keywords
Learning theory; Reproducing kernel Hilbert spaces; Gaussian kernels; Approximation; Riemannian manifolds; Multi-kernel least square regularization scheme; MSC: 68T05; 62J02
DOI
Not available
Abstract
Learning function relations or understanding structures of data lying on manifolds embedded in high-dimensional Euclidean spaces is an important topic in learning theory. In this paper we study approximation and learning by Gaussians of functions defined on a d-dimensional connected compact C^∞ Riemannian submanifold of ℝ^n which is isometrically embedded. We show that convolution with the Gaussian kernel with variance σ provides the uniform approximation order O(σ^s) when the approximated function is Lipschitz s for some s ∈ (0, 1]. The uniform normal neighborhoods of a compact Riemannian manifold play a central role in deriving the approximation order. This approximation result is used to investigate the regression learning algorithm generated by the multi-kernel least square regularization scheme associated with Gaussian kernels with flexible variances. When the regression function is Lipschitz s, our learning rate is ((log² m)/m)^(s/(8s+4d)), where m is the sample size. When the manifold dimension d is smaller than the dimension n of the ambient Euclidean space, this rate is much faster than those in the literature. By comparing approximation orders, we also show the essential difference between approximation schemes with flexible variances and those with a single variance.
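The abstract names the multi-kernel least square regularization scheme without displaying it. As a hedged sketch (this is the standard form such schemes take in the learning theory literature; the unnormalized Gaussian kernel, the penalty term, and the range of admissible variances here are assumptions, not quoted from the paper), the following LaTeX source writes out the Gaussian kernel, the scheme, and the rates stated above:

% A hedged sketch of the quantities named in the abstract; the kernel
% normalization and the admissible set of variances are assumptions,
% not verbatim from the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Gaussian kernel on the ambient space, restricted to the isometrically
% embedded submanifold X of R^n:
\[
K_\sigma(x,y) = \exp\!\left(-\frac{|x-y|^2}{2\sigma^2}\right),
\qquad x,y \in X,\ \sigma > 0.
\]

% Multi-kernel least square regularization scheme with flexible
% variances: the empirical square loss is penalized in the reproducing
% kernel Hilbert space H_sigma induced by K_sigma, and sigma itself is
% optimized rather than fixed in advance.
\[
f_{\mathbf z} = \operatorname*{arg\,min}_{\sigma > 0}\,
\min_{f \in \mathcal H_\sigma}
\left\{ \frac{1}{m}\sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2
+ \lambda \|f\|_{\mathcal H_\sigma}^2 \right\}.
\]

% Rates stated in the abstract for a Lipschitz-s regression function,
% s in (0,1]: uniform approximation order O(sigma^s), and learning rate
\[
\left(\frac{\log^2 m}{m}\right)^{s/(8s+4d)},
\]
% where m is the sample size and d the manifold dimension.

\end{document}

The outer minimization over σ is what "flexible variances" refers to: the scheme selects the kernel variance from the data rather than fixing a single variance beforehand.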
Pages: 291–310
Number of pages: 19