Approximate Clustering Ensemble Method for Big Data

Cited by: 13
Authors
Mahmud, Mohammad Sultan [1 ,2 ]
Huang, Joshua Zhexue [1 ,2 ]
Ruby, Rukhsana [3 ]
Ngueilbaye, Alladoumbaye [1 ,2 ]
Wu, Kaishun [1 ,2 ]
Affiliations
[1] Shenzhen Univ, Natl Engn Lab Big Data Syst Comp Technol, Shenzhen 518060, Peoples R China
[2] Shenzhen Univ, Coll Comp Sci & Software Engn, Big Data Inst, Shenzhen 518060, Peoples R China
[3] Guangdong Lab Artificial Intelligence & Digital Economy, Shenzhen 518107, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Clustering approximation method; clustering ensemble; consensus functions; distributed clustering; RSP data model; K-MEANS; I-NICE; NUMBER; ALGORITHM; CONSENSUS; MODELS;
DOI
10.1109/TBDATA.2023.3255003
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Clustering a big distributed dataset of hundreds of gigabytes or more is a challenging task in distributed computing. A popular way to tackle this problem is to use a random sample of the big dataset to compute an approximate result as an estimate of the true result that would be computed from the entire dataset. In this paper, instead of using a single random sample, we use multiple random samples to compute an ensemble result as the estimate of the true result of the big dataset. We propose a distributed computing framework to compute the ensemble result. In this framework, a big dataset is represented in the RSP data model as random sample data blocks managed in a distributed file system. To compute the ensemble clustering result, a set of RSP data blocks is randomly selected and clustered independently in parallel on the nodes of a computing cluster to generate the component clustering results. The component results are transferred to the master node, which computes the ensemble result. Since the random samples are disjoint, traditional consensus functions cannot be used; we therefore propose two new methods to integrate the component clustering results into the final ensemble result. The first method uses the component cluster centers to build a graph and the METIS algorithm to cut the graph into subgraphs, from which a set of candidate cluster centers is found; a hierarchical clustering method is then used to generate the final set of k cluster centers. The second method uses the clustering-by-passing-messages method to generate the final set of k cluster centers. Finally, the k-means algorithm is used to allocate the entire dataset into k clusters. Experiments were conducted on both synthetic and real-world datasets. The results show that the new ensemble clustering methods performed better than the comparison methods and that the distributed computing framework is efficient and scalable in clustering big datasets.
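The pipeline described in the abstract can be illustrated with a compact sketch. The following Python code is not the authors' implementation; it is a minimal illustration under simplifying assumptions: equal-sized splits of a shuffled array stand in for RSP data blocks, scikit-learn's AgglomerativeClustering stands in for the METIS-plus-hierarchical consensus over component centers, AffinityPropagation stands in for the clustering-by-passing-messages consensus, and all parameter values are arbitrary.

# Hedged sketch of ensemble clustering over disjoint random samples.
# NOT the authors' implementation; block construction, consensus functions,
# and parameters are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans, AffinityPropagation, AgglomerativeClustering


def component_centers(blocks, k):
    """Cluster each random-sample block independently (in practice, in
    parallel on worker nodes) and return the pooled component centers."""
    centers = []
    for block in blocks:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(block)
        centers.append(km.cluster_centers_)
    return np.vstack(centers)


def consensus_centers_hierarchical(centers, k):
    """Stand-in for consensus method 1: group the pooled component centers
    with hierarchical clustering and average each group into a final center."""
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(centers)
    return np.array([centers[labels == c].mean(axis=0) for c in range(k)])


def consensus_centers_message_passing(centers):
    """Stand-in for consensus method 2: let affinity propagation choose
    exemplar centers by passing messages; k is not fixed in advance."""
    ap = AffinityPropagation(random_state=0).fit(centers)
    return ap.cluster_centers_


def ensemble_cluster(full_data, blocks, k):
    """End-to-end sketch: component clustering -> consensus centers ->
    final k-means allocation of the entire dataset into k clusters."""
    pooled = component_centers(blocks, k)
    final_centers = consensus_centers_hierarchical(pooled, k)
    km = KMeans(n_clusters=k, init=final_centers, n_init=1).fit(full_data)
    return km.labels_, km.cluster_centers_


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(loc=m, scale=0.3, size=(2000, 2))
                      for m in ([0, 0], [3, 3], [0, 3])])
    rng.shuffle(data)
    # Disjoint "RSP blocks": here, equal-sized splits of the shuffled data.
    blocks = np.array_split(data, 10)[:4]   # use 4 of 10 blocks as samples
    labels, centers = ensemble_cluster(data, blocks, k=3)
    print(centers)

In a real deployment, the per-block component clustering would run in parallel on the worker nodes of the computing cluster, and only the component cluster centers, not the data blocks, would be transferred to the master node for the consensus step.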
Pages: 1142-1155
Number of Pages: 14