Fast Discrete Distribution Clustering Using Wasserstein Barycenter With Sparse Support

Cited by: 74
Authors
Ye, Jianbo [1 ]
Wu, Panruo [2 ]
Wang, James Z. [1 ]
Li, Jia [3 ]
Affiliations
[1] Penn State Univ, Coll Informat Sci & Technol, University Pk, PA 16802 USA
[2] Univ Calif Riverside, Dept Comp Sci & Engn, Riverside, CA 92521 USA
[3] Penn State Univ, Dept Stat, University Pk, PA 16802 USA
Funding
US National Science Foundation (NSF);
Keywords
Discrete distribution; K-means; clustering; large-scale learning; parallel computing; ADMM; earth mover's distance;
DOI
10.1109/TSP.2017.2659647
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
In a variety of research areas, the weighted bag of vectors and the histogram are widely used descriptors for complex objects. Both can be expressed as discrete distributions. D2-clustering pursues the minimum total within-cluster variation for a set of discrete distributions subject to the Kantorovich-Wasserstein metric. D2-clustering has a severe scalability issue, the bottleneck being the computation of a centroid distribution, called the Wasserstein barycenter, that minimizes its sum of squared distances to the cluster members. In this paper, we develop a modified Bregman ADMM approach for computing the approximate discrete Wasserstein barycenter of large clusters. In the case where the support points of the barycenters are unknown and of low cardinality, our method achieves high accuracy empirically at a much reduced computational cost. The strengths and weaknesses of our method and its alternatives are examined through experiments, and we recommend scenarios for their respective usage. Moreover, we develop both serial and parallelized versions of the algorithm. By experimenting with large-scale data, we demonstrate the computational efficiency of the new methods and investigate their convergence properties and numerical stability. The clustering results obtained on several datasets in different domains are highly competitive in comparison with some widely used methods in the corresponding areas.
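The following Python sketch (not part of the paper) is meant only to make the computation described in the abstract concrete: a K-means-style D2-clustering loop that alternates between assigning each discrete distribution to its nearest centroid under the squared 2-Wasserstein distance and re-estimating each centroid as a barycenter supported on a small, fixed number of points. The barycenter step uses the plain free-support fixed-point update of Cuturi and Doucet rather than the modified Bregman ADMM solver proposed in the paper, and every function and parameter name here (transport_cost_and_plan, barycenter_sparse_support, support_size, and so on) is hypothetical.

```python
# Illustrative sketch only, not the paper's algorithm. Each discrete
# distribution is a pair (points, weights): an (n_i, d) NumPy array of
# support points and a length-n_i weight vector summing to one.
import numpy as np
from scipy.optimize import linprog


def transport_cost_and_plan(a, b, M):
    """Solve the discrete OT LP: min <T, M> s.t. T 1 = a, T^T 1 = b, T >= 0."""
    m, n = M.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):                      # row-sum constraints: T 1 = a
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):                      # column-sum constraints: T^T 1 = b
        A_eq[m + j, j::n] = 1.0
    res = linprog(M.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(m, n)


def sq_wasserstein(x, wx, y, wy):
    """Squared 2-Wasserstein distance between two weighted point sets."""
    M = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    cost, _ = transport_cost_and_plan(wx, wy, M)
    return cost


def barycenter_sparse_support(members, k, n_iter=10, rng=None):
    """Approximate barycenter with k uniformly weighted support points,
    via the free-support fixed-point rule X <- (1/N) * sum_i (T_i @ Y_i) / b."""
    rng = np.random.default_rng(rng)
    b = np.full(k, 1.0 / k)
    all_pts = np.vstack([pts for pts, _ in members])
    X = all_pts[rng.choice(len(all_pts), size=k, replace=False)]
    for _ in range(n_iter):
        acc = np.zeros_like(X)
        for pts, w in members:
            M = ((X[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
            _, T = transport_cost_and_plan(b, w, M)
            acc += T @ pts
        X = acc / (len(members) * b[:, None])
    return X, b


def d2_clustering(dists, n_clusters, support_size=4, n_iter=5, seed=0):
    """dists: list of (points, weights); returns labels and centroid distributions."""
    rng = np.random.default_rng(seed)
    centroids = [dists[i] for i in rng.choice(len(dists), n_clusters, replace=False)]
    labels = np.zeros(len(dists), dtype=int)
    for _ in range(n_iter):
        for i, (pts, w) in enumerate(dists):       # assignment step
            labels[i] = int(np.argmin([sq_wasserstein(pts, w, cx, cw)
                                       for cx, cw in centroids]))
        for c in range(n_clusters):                # centroid (barycenter) update
            members = [dists[i] for i in np.flatnonzero(labels == c)]
            if members:
                centroids[c] = barycenter_sparse_support(members, support_size, rng=rng)
    return labels, centroids
```

Because every assignment and barycenter update in this sketch solves an exact transportation LP, it is only practical when the supports are small; the scalability bottleneck it exposes is precisely what the paper's modified Bregman ADMM approach is designed to alleviate.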
Pages: 2317 - 2332
Number of Pages: 16
Related Papers
50 records in total
  • [21] Training support vector machine using adaptive clustering
    Boley, D
    Cao, DW
    PROCEEDINGS OF THE FOURTH SIAM INTERNATIONAL CONFERENCE ON DATA MINING, 2004: 126 - 137
  • [22] An Accurate Clustering Algorithm for Fast Protein-Profiling using SCICA on MALDI-TOF
    Acharyya, Amit
    Neehar, Mavuduru
    Naik, Ganesh R.
    2015 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2015: 69 - 72
  • [23] K-Means Clustering With Decision Support System using SAW Determining Thesis Topic
    Daniati, Erna
    Nugroho, Arie
    2016 6TH IEEE INTERNATIONAL CONFERENCE ON CONTROL SYSTEM, COMPUTING AND ENGINEERING (ICCSCE), 2016: 326 - 331
  • [24] Elective Recommendation Support through K-Means Clustering using R-Tool
    Agnivesh
    Pandey, Rajiv
    2015 INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND COMMUNICATION NETWORKS (CICN), 2015: 851 - 856
  • [25] Performance Analysis of WSN Clustering Algorithms using Discrete Power Control
    Aslam, Nauman
    Robertson, William
    Phillips, William
    IPSI BGD TRANSACTIONS ON INTERNET RESEARCH, 2009, 5 (01): : 10 - 15
  • [26] Fast, Linear Time Hierarchical Clustering using the Baire Metric
    Contreras, Pedro
    Murtagh, Fionn
    JOURNAL OF CLASSIFICATION, 2012, 29 (02) : 118 - 143
  • [27] Voronoi Cell-Based Clustering Using a Kernel Support
    Kim, Kyoungok
    Son, Youngdoo
    Lee, Jaewook
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2015, 27 (04) : 1146 - 1156
  • [28] Histogram-based fast and robust image clustering using stochastic fractal search and morphological reconstruction
    Das, Arunita
    Dhal, Krishna Gopal
    Ray, Swarnajit
    Galvez, Jorge
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (06) : 4531 - 4554
  • [30] Semi-Unsupervised Learning: Clustering and Classifying using Ultra-Sparse Labels
    Willetts, Matthew
    Roberts, Stephen
    Holmes, Chris
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020: 5286 - 5295