Accelerating Infinite Ensemble of Clustering by Pivot Features

Cited by: 0
Authors
Xiao-Bo Jin
Guo-Sen Xie
Kaizhu Huang
Amir Hussain
Affiliations
[1] Henan University of Technology,College of Information Science and Engineering
[2] Inception Institute of Artificial Intelligence (IIAI),College of Information Science and Engineering
[3] Henan University of Science and Technology,Department of Electrical & Electronic Engineering
[4] Xi’an Jiaotong-Liverpool University,Division of Computing Science & Maths, School of Natural Sciences
[5] University of Stirling
Source
Cognitive Computation | 2018, Vol. 10
Keywords
Ensemble clustering; Infinite ensemble clustering; Pivot features; Reconstruction of features;
DOI
Not available
Abstract
Infinite ensemble clustering (IEC) combines ensemble clustering with representation learning by fusing infinitely many basic partitions, and shows appealing performance in the unsupervised setting. However, it must solve a linear equation system whose time complexity grows as O(d³), where d is the concatenated dimension of the many clustering results. Inspired by the cognitive characteristic of human memory, which attends to pivot features in a more compressed data space, we propose an accelerated version of IEC (AIEC) that extracts pivot features and learns multiple mappings to reconstruct the remaining features from them, so that the linear equation system can be solved with time complexity O(dr²) (r ≪ d). Experimental results on standard image and text datasets show that AIEC greatly reduces the running time of IEC while achieving comparable clustering performance.
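The complexity claim in the abstract can be illustrated with a minimal sketch: reconstructing all d concatenated partition features from r pivot features via a linear mapping only requires solving an r × r system of normal equations, costing roughly O(r³ + dr²) instead of the O(d³) of a full d × d system. The sketch below uses random data and random pivot selection purely for illustration; the actual pivot-selection criterion and mapping structure of AIEC are described in the paper, not here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 500, 200, 20  # samples, concatenated partition dimension, pivot count

# H: stand-in for the concatenated encodings of many base clusterings
H = rng.random((n, d))

# Pick r pivot features (toy choice: random columns; AIEC uses a pivot criterion)
pivot_idx = rng.choice(d, size=r, replace=False)
P = H[:, pivot_idx]                      # n x r pivot-feature matrix

# Learn a linear mapping W (r x d) reconstructing all d features from the pivots:
# solve the r x r normal equations (P^T P + lam*I) W = P^T H
lam = 1e-3                               # small ridge term for numerical stability
G = P.T @ P + lam * np.eye(r)            # r x r Gram matrix
W = np.linalg.solve(G, P.T @ H)          # O(r^3 + d*r^2) rather than O(d^3)

H_hat = P @ W                            # reconstruction of the full feature matrix
err = np.linalg.norm(H - H_hat) / np.linalg.norm(H)
```

Because the system being factorized is only r × r, increasing d adds just one extra right-hand side per feature, which is where the O(dr²) scaling comes from.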
Pages: 1042-1050 (8 pages)