MIXTURES OF GAUSSIAN DISTRIBUTIONS UNDER LINEAR DIMENSIONALITY REDUCTION

Cited: 0
Authors
Otoom, Ahmed Fawzi [1 ]
Concha, Oscar Perez [1 ]
Piccardi, Massimo [1 ]
Affiliations
[1] Univ Technol Sydney, Fac Engn & IT, Sydney, NSW, Australia
Source
VISAPP 2010: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, VOL 2 | 2010
Funding
Australian Research Council;
Keywords
Dimensionality reduction; Linear transformation; Random projections; Mixture models; Object classification;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
High-dimensional spaces pose a serious challenge to the learning process. It is the combination of a limited number of samples and high dimensionality that places many problems under the "curse of dimensionality", which severely restricts the practical application of density estimation. Many techniques have been proposed in the past to discover embedded, locally linear manifolds of lower dimensionality, including the mixture of Principal Component Analyzers, the mixture of Probabilistic Principal Component Analyzers, and the mixture of Factor Analyzers. In this paper, we present a mixture model for reducing dimensionality based on a linear transformation which is not restricted to be orthogonal. Two methods are proposed for learning all the transformations and mixture parameters: the first is based on an iterative maximum-likelihood approach, and the second on random transformations and fixed (non-iterative) probability functions. For experimental validation, we used the proposed model for maximum-likelihood classification of five "hard" data sets, including data sets from the UCI repository and the authors' own. Moreover, we compared the classification performance of the proposed method with that of other popular classifiers, including the mixture of Probabilistic Principal Component Analyzers and the Gaussian mixture model. In all cases but one, the proposed method achieved the highest accuracy, with improvements over the runner-up ranging from 0.2% to 5.2%.
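The abstract's second learning method combines random linear transformations with fixed, non-iterative probability functions. The sketch below is a simplified, single-component illustration of that idea, not the authors' implementation: each sample is projected through one shared random (non-orthogonal) matrix, a Gaussian is fitted per class in the reduced space, and classification picks the class with the highest log-likelihood. All data, dimensions, and helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes in D = 50 dimensions, reduced to d = 5 via a
# random, non-orthogonal linear projection (illustrative values only).
D, d, n = 50, 5, 200
X0 = rng.normal(0.0, 1.0, (n, D))   # class 0 training samples
X1 = rng.normal(0.5, 1.0, (n, D))   # class 1, shifted mean

# Random projection matrix: entries drawn i.i.d., columns not orthogonal.
W = rng.normal(size=(D, d)) / np.sqrt(d)

def fit_gaussian(Z):
    """Fit a full-covariance Gaussian to projected samples Z (n x d)."""
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])  # regularized
    return mu, cov

def log_lik(z, mu, cov):
    """Gaussian log-density of a single projected sample z."""
    diff = z - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + logdet + len(mu) * np.log(2 * np.pi))

# One Gaussian per class, estimated in the d-dimensional projected space.
params = [fit_gaussian(X @ W) for X in (X0, X1)]

def classify(x):
    """Maximum-likelihood class label for a raw D-dimensional sample."""
    z = x @ W
    return int(np.argmax([log_lik(z, mu, cov) for mu, cov in params]))

# Evaluate on held-out samples from each class.
test0 = rng.normal(0.0, 1.0, (100, D))
test1 = rng.normal(0.5, 1.0, (100, D))
acc = (np.mean([classify(x) == 0 for x in test0])
       + np.mean([classify(x) == 1 for x in test1])) / 2
print(f"held-out accuracy: {acc:.2f}")
```

The projection here is fixed rather than learned, which is what makes the method non-iterative; the paper's model additionally uses a mixture of such components, whereas this sketch keeps a single component per class for brevity.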
Pages: 511-518
Page count: 8