Structured AutoEncoders for Subspace Clustering

Cited by: 315
Authors
Peng, Xi [1]
Feng, Jiashi [2]
Xiao, Shijie [3]
Yau, Wei-Yun [4]
Zhou, Joey Tianyi [5]
Yang, Songfan [6]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Chengdu 610065, Sichuan, Peoples R China
[2] Natl Univ Singapore, Dept ECE, Singapore 119077, Singapore
[3] OmniVis Technol Singapore Pte Ltd, Singapore 609935, Singapore
[4] ASTAR, Inst Infocomm Res, Singapore 138632, Singapore
[5] ASTAR, Inst High Performance Comp, Singapore 138632, Singapore
[6] Sichuan Univ, Coll Elect & Informat Engn, TAL Educ Grp, AI Lab, Chengdu 610065, Sichuan, Peoples R China
Keywords
Unsupervised deep learning; locality preservation; globality preservation; spectral clustering; dimensionality reduction
DOI
10.1109/TIP.2018.2848470
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Existing subspace clustering methods typically employ shallow models to estimate the underlying subspaces of unlabeled data points and cluster the points into corresponding groups. However, due to the limited representational capacity of such shallow models, these methods may fail to handle realistic data that lack a linear subspace structure. To address this issue, we propose a novel subspace clustering approach that introduces a new deep model, the Structured AutoEncoder (StructAE). The StructAE learns a set of explicit transformations that progressively map input data points into nonlinear latent spaces while preserving both local and global subspace structure. In particular, to preserve local structure, the StructAE learns a representation for each data point by minimizing the reconstruction error with respect to that point itself. To preserve global structure, the StructAE incorporates prior structural information by encouraging the learned representations to preserve specified reconstruction patterns over the entire data set. To the best of our knowledge, StructAE is one of the first deep subspace clustering approaches. Extensive experiments show that the proposed StructAE significantly outperforms 15 state-of-the-art subspace clustering approaches in terms of five evaluation metrics.
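The two structure-preserving objectives described in the abstract can be read as a per-sample reconstruction loss (local structure) plus a term that keeps the latent codes consistent with a prior self-expression pattern over the whole data set (global structure). The following minimal PyTorch sketch illustrates that reading only; the single-layer architecture, sigmoid activations, layer sizes, and the use of a fixed precomputed coefficient matrix C are assumptions for illustration and not the authors' implementation.

# Minimal sketch of the two-term objective suggested by the abstract.
# Assumptions (not from the paper): one hidden layer, sigmoid activations,
# and a fixed self-expression matrix C precomputed on the raw data
# (e.g. by a shallow sparse or low-rank representation model).
import torch
import torch.nn as nn

class StructAESketch(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)       # latent representation H (rows = samples)
        x_hat = self.decoder(h)   # reconstruction of the input
        return h, x_hat

def structae_loss(x, x_hat, h, C, lam=1.0):
    # Local term: each point is reconstructed from its own latent code.
    recon = ((x - x_hat) ** 2).sum()
    # Global term: latent codes should follow the prior reconstruction
    # pattern H ~= C H encoded by the (assumed precomputed) matrix C.
    struct = ((h - C @ h) ** 2).sum()
    return recon + lam * struct

# Usage on random data with a hypothetical coefficient matrix C.
n, d, k = 64, 100, 20
X = torch.rand(n, d)
C = torch.rand(n, n) * 0.01
model = StructAESketch(d, k)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    H, X_hat = model(X)
    loss = structae_loss(X, X_hat, H, C, lam=0.1)
    opt.zero_grad()
    loss.backward()
    opt.step()

After training, the latent codes H would typically be fed to a spectral clustering step, in line with the spectral clustering keyword of the record; that step is omitted here.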
Pages: 5076-5086
Number of pages: 11