Riemannian manifold learning

Cited: 314
Authors
Lin, Tong [1 ]
Zha, Hongbin [1 ]
Affiliations
[1] Peking Univ, Sch EECS, Key Lab Machine Percept, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
dimensionality reduction; manifold learning; manifold reconstruction; Riemannian manifolds; Riemannian normal coordinates;
DOI
10.1109/TPAMI.2007.70735
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, manifold learning has been widely exploited in pattern recognition, data analysis, and machine learning. This paper presents a novel framework, called Riemannian manifold learning (RML), based on the assumption that the input high-dimensional data lie on an intrinsically low-dimensional Riemannian manifold. The main idea is to formulate the dimensionality reduction problem as a classical problem in Riemannian geometry: how to construct coordinate charts for a given Riemannian manifold. We implement the Riemannian normal coordinate chart, the most widely used chart in Riemannian geometry, for a set of unorganized data points. First, two input parameters (the neighborhood size k and the intrinsic dimension d) are estimated based on an efficient simplicial reconstruction of the underlying manifold. Then, the normal coordinates are computed to map the input high-dimensional data into a low-dimensional space. Experiments on synthetic data, as well as real-world images, demonstrate that our algorithm can learn intrinsic geometric structures of the data, preserve radial geodesic distances, and yield regular embeddings.
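A normal-coordinate chart preserves geodesic distances measured radially from a chosen base point. The following is a minimal illustrative sketch, not the authors' implementation, of the ingredient that underlies this: approximating radial geodesic distances from a base point by shortest paths on a k-nearest-neighbor graph of the sampled manifold (the helper name `radial_geodesics` is hypothetical).

```python
# Hedged sketch: approximate radial geodesic distances from a base point
# via shortest paths on a k-NN graph. Illustrative only; this is not the
# RML algorithm itself, just the geodesic-distance ingredient it preserves.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def radial_geodesics(X, k=8, base=0):
    """Shortest-path (graph geodesic) distances from X[base] on the
    k-nearest-neighbor graph of the point cloud X (n_points x dim)."""
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=k + 1)      # first column is each point itself
    n = len(X)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    vals = dist[:, 1:].ravel()
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    return dijkstra(graph, directed=False, indices=base)

# Toy example: points sampled along a half circle (a 1-D manifold in R^2).
# The true geodesic distance from the first to the last point is pi.
t = np.linspace(0, np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)]
g = radial_geodesics(X, k=4, base=0)
```

On this densely sampled curve, `g[-1]` closely approximates the arc length pi, whereas the straight-line (ambient) distance between the endpoints is only 2; this gap is exactly what geodesic-based methods such as RML exploit.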
Pages: 796-809
Number of pages: 14