A general procedure for learning mixtures of independent component analyzers

Cited by: 33
Authors
Salazar, Addisson [1 ]
Vergara, Luis [1 ]
Serrano, Arturo [1 ]
Igual, Jorge [1 ]
Affiliations
[1] Univ Politecn Valencia, Signal Proc Grp GTS, Inst Telecommun & Multimedia Applicat ITEAM, Valencia 46022, Spain
Keywords
ICA mixture model; ICA; BSS; Non-parametric density estimation; Semi-supervised learning; BLIND SOURCE SEPARATION; UNSUPERVISED CLASSIFICATION; ICA; MODEL; ALGORITHMS;
DOI
10.1016/j.patcog.2009.05.013
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
This paper presents a new procedure for learning mixtures of independent component analyzers. The procedure includes non-parametric estimation of the source densities, supervised-unsupervised learning of the model parameters, incorporation of any independent component analysis (ICA) algorithm into the learning of the ICA mixtures, and estimation of residual dependencies after training in order to correct the posterior probability of each class given the test observation vector. We demonstrate the performance of the procedure in the classification of ICA mixtures of two, three, and four classes of synthetic data, and in the classification of defective materials, consisting of 3D finite element models and lab specimens, in non-destructive testing using the impact-echo technique. The application of the proposed posterior probability correction improves classification accuracy. Semi-supervised learning shows that unlabeled data can degrade the performance of the classifier when they do not fit the generative model. Comparative results of the proposed method and standard ICA algorithms for blind source separation in one and multiple ICA data mixtures show the suitability of the non-parametric ICA mixture-based method for data modeling. (C) 2009 Elsevier Ltd. All rights reserved.
Pages: 69-85
Page count: 17
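
The abstract describes an ICA mixture model: each class has its own ICA decomposition, the source densities are estimated non-parametrically, and a test vector is assigned to the class with the highest posterior probability. The following is a minimal, hypothetical Python sketch of that classification rule only, assuming per-class unmixing via scikit-learn's FastICA and Gaussian-kernel density estimates of the sources; the class name ICAMixtureClassifier, the fixed bandwidth, and the synthetic usage data are illustrative assumptions, and the sketch does not cover the paper's semi-supervised learning or the posterior correction for residual dependencies.

```python
# Sketch only (not the authors' implementation): ICA mixture classification
# with kernel (non-parametric) estimates of the per-class source densities.
import numpy as np
from sklearn.decomposition import FastICA


def gaussian_kde_logpdf(samples, query, bandwidth):
    """Log of a 1-D Gaussian kernel density estimate of `samples` at `query`."""
    diffs = (query - samples) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    return np.log(kernels.mean() + 1e-300)


class ICAMixtureClassifier:
    """Hypothetical ICA mixture classifier with kernel source densities."""

    def __init__(self, n_classes, bandwidth=0.3):
        self.n_classes = n_classes
        self.bandwidth = bandwidth
        self.models = []  # one (ica, training sources, log prior) per class

    def fit(self, X, y):
        for k in range(self.n_classes):
            Xk = X[y == k]
            ica = FastICA(random_state=0)
            Sk = ica.fit_transform(Xk)          # training sources of class k
            log_prior = np.log(len(Xk) / len(X))
            self.models.append((ica, Sk, log_prior))
        return self

    def log_posteriors(self, x):
        """Unnormalised log p(k | x) for one observation x."""
        scores = np.empty(self.n_classes)
        for k, (ica, Sk, log_prior) in enumerate(self.models):
            s = ica.transform(x[None, :])[0]    # sources of x under class k
            W = ica.components_                 # unmixing matrix of class k
            # log p(x | k) = log|det W_k| + sum_i log p_i(s_i), with each
            # p_i estimated by a kernel density over the training sources.
            log_lik = np.linalg.slogdet(W)[1]
            log_lik += sum(gaussian_kde_logpdf(Sk[:, i], s[i], self.bandwidth)
                           for i in range(Sk.shape[1]))
            scores[k] = log_prior + log_lik
        return scores

    def predict(self, x):
        return int(np.argmax(self.log_posteriors(x)))


# Toy usage: two 2-D classes with uniform and Laplacian sources, respectively.
rng = np.random.default_rng(0)
A0, A1 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
X = np.vstack([rng.uniform(-1, 1, size=(500, 2)) @ A0.T,
               rng.laplace(size=(500, 2)) @ A1.T])
y = np.repeat([0, 1], 500)
clf = ICAMixtureClassifier(n_classes=2).fit(X, y)
print(clf.predict(X[0]), clf.predict(X[700]))   # one sample from each class
```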