Deep Clustering via Weighted k-Subspace Network

Cited: 13
Authors
Huang, Weitian [1 ]
Yin, Ming [1 ]
Li, Jianzhong [1 ]
Xie, Shengli [1 ]
Affiliations
[1] Guangdong Univ Technol, Sch Automat, Guangdong Key Lab IoT Informat Proc, Guangzhou 510006, Guangdong, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Deep clustering; subspace clustering; weighted; autoencoder; NEURAL-NETWORKS;
DOI
10.1109/LSP.2019.2941368
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808; 0809;
Abstract
Subspace clustering aims to separate data into clusters under the hypothesis that samples within the same cluster lie in the same low-dimensional subspace. Because of its tough pairwise constraints, k-subspace clustering is sensitive to outliers and initialization. In this letter, we present a novel deep architecture for k-subspace clustering to address this issue, called Deep Weighted k-Subspace Clustering (DWSC). Specifically, our framework consists of an autoencoder and a weighted k-subspace network. We first use the autoencoder to non-linearly compress the samples into a low-dimensional latent space. In the weighted k-subspace network, we feed the latent representation into an assignment network that outputs soft assignments, which represent the probability of each sample belonging to the corresponding subspace. Subsequently, the optimal k subspaces are identified by minimizing the projection residuals of the latent representations onto all subspaces, using the learned soft assignments as weights. Finally, we jointly optimize representation learning and clustering in a unified framework. Experimental results show that our approach outperforms state-of-the-art subspace clustering methods on two benchmark datasets.
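The abstract describes three components: an autoencoder for latent compression, an assignment network producing soft memberships, and a weighted projection-residual loss over k learnable subspaces. The following is a minimal sketch of that loss, assuming a PyTorch implementation; the layer sizes, subspace dimension, and the simple sum of reconstruction and clustering terms are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a weighted k-subspace loss as described in the abstract.
# Architecture details (layer widths, subspace_dim, loss weighting) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DWSCSketch(nn.Module):
    def __init__(self, input_dim=784, latent_dim=10, n_clusters=10, subspace_dim=5):
        super().__init__()
        # Autoencoder: non-linear compression into the latent space.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))
        # Assignment network: soft probability of belonging to each subspace.
        self.assign = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                    nn.Linear(64, n_clusters))
        # One learnable basis per subspace: shape (k, latent_dim, subspace_dim).
        self.bases = nn.Parameter(torch.randn(n_clusters, latent_dim, subspace_dim))

    def forward(self, x):
        z = self.encoder(x)                   # latent representation, (B, d)
        x_rec = self.decoder(z)               # reconstruction, (B, input_dim)
        s = F.softmax(self.assign(z), dim=1)  # soft assignments, (B, k)
        # Projection of z onto each subspace: U_k (U_k^+ z); the pseudo-inverse
        # gives a least-squares projection even for non-orthonormal bases.
        proj = torch.einsum('kde,keD,bD->bkd',
                            self.bases, torch.linalg.pinv(self.bases), z)  # (B, k, d)
        residual = ((z.unsqueeze(1) - proj) ** 2).sum(-1)                  # (B, k)
        cluster_loss = (s * residual).sum(dim=1).mean()  # residuals weighted by soft assignments
        rec_loss = F.mse_loss(x_rec, x)                  # autoencoder reconstruction term
        return rec_loss + cluster_loss, s
```

In such a setup, the two terms would be optimized jointly, and cluster labels could be read off as the argmax of the soft assignments s after training.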
Pages: 1628-1632
Number of pages: 5
Related Papers
50 records
  • [41] Unsupervised Deep Learning for Subspace Clustering
    Sekmen, Ali
    Koku, Ahmet Bugra
    Parlaktuna, Mustafa
    Abdul-Malek, Ayad
    Vanamala, Nagendrababu
    2017 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2017, : 2089 - 2094
  • [42] Deep Multimodal Subspace Clustering Networks
    Abavisani, Mahdi
    Patel, Vishal M.
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2018, 12 (06) : 1601 - 1614
  • [43] Overcomplete Deep Subspace Clustering Networks
    Valanarasu, Jeya Maria Jose
    Patel, Vishal M.
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 746 - 755
  • [44] Duet Robust Deep Subspace Clustering
    Jiang, Yangbangyan
    Xu, Qianqian
    Yang, Zhiyong
    Cao, Xiaochun
    Huang, Qingming
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 1596 - 1604
  • [45] Multiview Deep Subspace Clustering Networks
    Zhu, Pengfei
    Yao, Xinjie
    Wang, Yu
    Hui, Binyuan
    Du, Dawei
    Hu, Qinghua
    IEEE TRANSACTIONS ON CYBERNETICS, 2024, 54 (07) : 4280 - 4293
  • [46] Triplet Deep Subspace Clustering via Self-Supervised Data Augmentation
    Zhang, Zhao
    Li, Xianzhen
    Zhang, Haijun
    Yang, Yi
    Yan, Shuicheng
    Wang, Meng
    2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021), 2021, : 946 - 955
  • [47] Deep Bayesian Sparse Subspace Clustering
    Ye, Xulun
    Luo, Shuhui
    Chao, Jieyu
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1888 - 1892
  • [48] Efficient Deep Embedded Subspace Clustering
    Cai, Jinyu
    Fan, Jicong
    Guo, Wenzhong
    Wang, Shiping
    Zhang, Yunhe
    Zhang, Zhao
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 21 - 30
  • [49] Cross-Modal Subspace Clustering via Deep Canonical Correlation Analysis
    Gao, Quanxue
    Lian, Huanhuan
    Wang, Qianqian
    Sun, Gan
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 3938 - 3945
  • [50] Projective Low-rank Subspace Clustering via Learning Deep Encoder
    Li, Jun
    Liu, Hongfu
    Zhao, Handong
    Fu, Yun
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2145 - 2151