Variable Weighting in Fuzzy k-Means Clustering to Determine the Number of Clusters

Cited by: 39
Authors
Khan, Imran [1 ]
Luo, Zongwei [1 ]
Huang, Joshua Zhexue [2 ]
Shahzad, Waseem [3 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen Key Lab Computat Intelligence, Shenzhen 518055, Guangdong, Peoples R China
[2] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Guangdong, Peoples R China
[3] Natl Univ Comp & Emerging Sci, Dept Comp Sci, Islamabad 44000, Pakistan
Keywords
Fuzzy k-means; clustering; number of clusters; data mining; variable weighting; MEANS ALGORITHM; DATA SETS; SELECTION; CENTERS; MODEL;
DOI
10.1109/TKDE.2019.2911582
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
One of the most significant problems in cluster analysis is determining the number of clusters in unlabeled data, which is a required input for most clustering algorithms. Some methods have been developed to address this problem. However, little attention has been paid to algorithms that are insensitive to the initialization of cluster centers and utilize variable weights to recover the number of clusters. To fill this gap, we extend the standard fuzzy k-means clustering algorithm. It can automatically determine the number of clusters by iteratively calculating the weights of all variables and the membership value of each object in all clusters. Two new steps are added to the fuzzy k-means clustering process. One is to introduce a penalty term that makes the clustering process insensitive to the initial cluster centers. The other is to utilize a formula for iterative updating of variable weights in each cluster based on the current partition of the data. Experimental results on real-world and synthetic datasets have shown that the proposed algorithm effectively determines the correct number of clusters even when initialized with different numbers of cluster centroids. We also tested the proposed algorithm on gene data to determine a subset of important genes.
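The abstract describes an iterative loop that alternates between updating fuzzy memberships, cluster centers, and per-cluster variable weights. The general idea can be sketched as follows; note this is NOT the paper's exact algorithm: the entropy-style weight-update formula, the hyperparameter `gamma`, and the function name are illustrative assumptions, and the paper's penalty term for initialization insensitivity is omitted.

```python
import numpy as np

def weighted_fuzzy_kmeans(X, k, m=2.0, gamma=2.0, iters=100):
    """Sketch of fuzzy k-means with per-cluster variable weights.

    Illustrative only: the entropy-style weight update and `gamma`
    are assumptions, and the paper's penalty term for initialization
    insensitivity is not implemented here.
    """
    n, d = X.shape
    # Deterministic farthest-point initialization of cluster centers.
    idx = [0]
    for _ in range(1, k):
        d2 = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(axis=2).min(axis=1)
        idx.append(int(d2.argmax()))
    centers = X[idx].astype(float)
    weights = np.full((k, d), 1.0 / d)  # per-cluster variable weights, sum to 1
    for _ in range(iters):
        diff2 = (X[:, None, :] - centers[None, :, :]) ** 2        # (n, k, d)
        dist = (diff2 * weights[None, :, :]).sum(axis=2) + 1e-12  # weighted dists
        # Standard fuzzy-membership update for fuzzifier m.
        inv = dist ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)                  # (n, k)
        um = u ** m
        # Center update: membership-weighted means.
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        # Per-cluster, per-variable dispersion drives the weight update.
        diff2 = (X[:, None, :] - centers[None, :, :]) ** 2
        D = np.einsum('ij,ijv->jv', um, diff2)                    # (k, d)
        w = np.exp(-(D - D.min(axis=1, keepdims=True)) / gamma)
        weights = w / w.sum(axis=1, keepdims=True)
    return u, centers, weights

# Demo on two well-separated synthetic 2-D blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, size=(50, 2)),
               rng.normal(5.0, 0.2, size=(50, 2))])
u, centers, weights = weighted_fuzzy_kmeans(X, k=2)
labels = u.argmax(axis=1)
```

On well-separated data the memberships become near-crisp, and because both variables are equally informative here, the learned weights stay close to uniform; in the paper's setting, uninformative variables would instead receive small weights.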
Pages: 1838-1853
Page count: 16
Related papers (46 in total)
[11] de Amorim, Renato Cordeiro; Hennig, Christian. Recovering the number of clusters in data sets with noise features using feature rescaling factors. Information Sciences, 2015, 324: 126-145.
[12] Deelers S, 2007, PROC WRLD ACAD SCI E, V26, P323.
[13] Escobar, M. D.; West, M. Bayesian density-estimation and inference using mixtures. Journal of the American Statistical Association, 1995, 90(430): 577-588.
[14] Fei-Fei L, 2005, PROC CVPR IEEE, P524.
[15] Ferguson, T. S. Bayesian analysis of some nonparametric problems. Annals of Statistics, 1973, 1(2): 209-230.
[16] Frigui, H.; Krishnapuram, R. Clustering by competitive agglomeration. Pattern Recognition, 1997, 30(7): 1109-1119.
[17] Gershman, Samuel J.; Blei, David M. A tutorial on Bayesian nonparametric models. Journal of Mathematical Psychology, 2012, 56(1): 1-12.
[18] Guo, P.; Chen, C. L. P.; Lyu, M. R. Cluster number selection for a small set of samples using the Bayesian Ying-Yang model. IEEE Transactions on Neural Networks, 2002, 13(3): 757-763.
[19] Hamerly G, 2004, ADV NEUR IN, V16, P281.
[20] Hjort N. L., 2010, BAYESIAN NONPARAMETR, V28, P36.