Approximate Clustering Ensemble Method for Big Data

Cited by: 13
Authors
Mahmud, Mohammad Sultan [1 ,2 ]
Huang, Joshua Zhexue [1 ,2 ]
Ruby, Rukhsana [3 ]
Ngueilbaye, Alladoumbaye [1 ,2 ]
Wu, Kaishun [1 ,2 ]
Affiliations
[1] Shenzhen Univ, Natl Engn Lab Big Data Syst Comp Technol, Shenzhen 518060, Peoples R China
[2] Shenzhen Univ, Coll Comp Sci & Software Engn, Big Data Inst, Shenzhen 518060, Peoples R China
[3] Guangdong Lab Artificial Intelligence & Digital Ec, Shenzhen 518107, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Clustering approximation method; clustering ensemble; consensus functions; distributed clustering; RSP data model; K-MEANS; I-NICE; NUMBER; ALGORITHM; CONSENSUS; MODELS;
DOI
10.1109/TBDATA.2023.3255003
Chinese Library Classification (CLC) number
TP [Automation technology; computer technology];
Discipline classification code
0812;
Abstract
Clustering a big distributed dataset of hundreds of gigabytes or more is a challenging task in distributed computing. A popular way to tackle this problem is to compute, from a random sample of the big dataset, an approximate result that serves as an estimate of the true result computed from the entire dataset. In this paper, instead of using a single random sample, we use multiple random samples to compute an ensemble result as the estimate of the true result of the big dataset. We propose a distributed computing framework to compute this ensemble result. In this framework, a big dataset is represented in the RSP data model as random sample data blocks managed in a distributed file system. To compute the ensemble clustering result, a set of RSP data blocks is randomly selected as random samples and clustered independently, in parallel, on the nodes of a cluster to generate the component clustering results. The component results are transferred to the master node, which computes the ensemble result. Since the random samples are disjoint, traditional consensus functions cannot be used, so we propose two new methods to integrate the component clustering results into the final ensemble result. The first method uses the component cluster centers to build a graph and applies the METIS algorithm to cut the graph into subgraphs, from which a set of candidate cluster centers is found; a hierarchical clustering method is then used to generate the final set of k cluster centers. The second method uses the clustering-by-passing-messages method to generate the final set of k cluster centers. Finally, the k-means algorithm is used to allocate the entire dataset into k clusters. Experiments were conducted on both synthetic and real-world datasets. The results show that the new ensemble clustering methods performed better than the comparison methods, and that the distributed computing framework is efficient and scalable for clustering big datasets.
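The pipeline in the abstract can be sketched end to end on a single machine. The sketch below is a simplified assumption-laden illustration, not the paper's implementation: it splits a synthetic dataset into disjoint random-sample blocks (standing in for RSP data blocks), runs k-means on each block independently (standing in for the parallel worker nodes), and then, instead of the paper's METIS graph cut, integrates the pooled component centers directly with hierarchical (agglomerative) clustering before the final k-means allocation. All names (`n_blocks`, `final_centers`, etc.) are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(42)

# Synthetic "big" dataset: 3 well-separated Gaussian clusters.
true_centers = np.array([[0.0, 0.0], [10.0, 10.0], [-10.0, 10.0]])
data = np.vstack([c + rng.normal(size=(2000, 2)) for c in true_centers])

k = 3         # target number of clusters
n_blocks = 5  # number of disjoint random-sample blocks (RSP-style)

# Step 1: shuffle and split the data into disjoint random-sample blocks.
perm = rng.permutation(len(data))
blocks = np.array_split(data[perm], n_blocks)

# Step 2: cluster each block independently (in a real deployment this
# runs in parallel on worker nodes) and pool the component centers.
component_centers = np.vstack([
    KMeans(n_clusters=k, n_init=10, random_state=0).fit(b).cluster_centers_
    for b in blocks
])

# Step 3: integrate the pooled component centers into k final centers.
# (The paper first cuts a center graph with METIS; here we cluster the
# pooled centers hierarchically as a stand-in for that step.)
labels = AgglomerativeClustering(n_clusters=k).fit_predict(component_centers)
final_centers = np.vstack([
    component_centers[labels == j].mean(axis=0) for j in range(k)
])

# Step 4: allocate the entire dataset into k clusters with k-means,
# initialized at the ensemble centers.
km = KMeans(n_clusters=k, init=final_centers, n_init=1).fit(data)
print(km.cluster_centers_)
```

Because each block is itself a random sample, every block's k-means recovers centers close to the true ones, so the pooled centers form k tight groups that the integration step collapses into the final estimate.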
Pages: 1142-1155 (14 pages)
References
42 in total
  • [1] Andrews, G. R., 2002, FDN PARALLEL DISTRIB, 1st ed.
  • [2] Ayad, H. G.; Kamel, M. S. Cumulative voting consensus method for partitions with a variable number of clusters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(1): 160-173.
  • [3] Bachem, O.; Lucic, M.; Krause, A. Scalable k-means clustering via lightweight coresets. KDD'18: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018: 1119-1127.
  • [4] Calinski, T. Communications in Statistics, 1974, 3: 1. DOI: 10.1080/03610927408827101.
  • [5] Chen, J.-X.; Gong, Y.-J.; Chen, W.-N.; Li, M.; Zhang, J. Elastic differential evolution for automatic data clustering. IEEE Transactions on Cybernetics, 2021, 51(8): 4134-4147.
  • [6] Chen, K. On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications. SIAM Journal on Computing, 2009, 39(3): 923-947.
  • [7] Davies, D. L.; Bouldin, D. W. A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1979, 1(2): 224-227.
  • [8] Domeniconi, C.; Al-Razgan, M. Weighted cluster ensembles: Methods and analysis. ACM Transactions on Knowledge Discovery from Data, 2009, 2(4).
  • [9] Estiri, H.; Omran, B. A.; Murphy, S. N. kluster: An efficient scalable procedure for approximating the number of clusters in unsupervised learning. Big Data Research, 2018, 13: 38-51.
  • [10] Fang, Y.; Wang, J. Selection of the number of clusters via the bootstrap method. Computational Statistics & Data Analysis, 2012, 56(3): 468-477.