Leveraging Transformer-based autoencoders for low-rank multi-view subspace clustering

Cited by: 0
Authors
Lin, Yuxiu [1 ,2 ]
Liu, Hui [1 ,2 ]
Yu, Xiao [1 ,2 ]
Zhang, Caiming [2 ,3 ]
Affiliations
[1] Shandong Univ Finance & Econ, Sch Comp Sci & Technol, Jinan 250014, Peoples R China
[2] Shandong Key Lab Lightweight Intelligent Comp & Vi, Jinan 250014, Peoples R China
[3] Shandong Univ, Sch Software, Jinan 250101, Peoples R China
Keywords
Multi-view representation learning; Subspace clustering; Transformer; Weighted Schatten p-norm
DOI
10.1016/j.patcog.2024.111331
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep multi-view subspace clustering is an active research topic that aims to integrate information from multiple views to produce accurate cluster predictions. Limited by the inherent heterogeneity of distinct views, existing works rely primarily on view-specific encoding structures for representation learning. Although effective to some extent, this design may hinder full exploitation of view information and increase the complexity of model training. To this end, this paper proposes TALMSC, a novel low-rank multi-view subspace clustering method backed by Transformer-based autoencoders. Specifically, we extend the self-attention mechanism to the multi-view clustering setting, developing multiple Transformer-based autoencoders that allow modality-agnostic representation learning. Based on the extracted latent representations, we deploy a sample-wise weighted fusion module that incorporates contrastive learning and orthogonal operators to formulate both consistency and diversity, generating a comprehensive joint representation. Moreover, TALMSC employs a highly flexible low-rank regularizer under the weighted Schatten p-norm to constrain self-expression and better explore the low-rank structure. Extensive experiments on five multi-view datasets show that our method achieves superior clustering performance over state-of-the-art methods.
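The weighted Schatten p-norm referenced in the abstract is, in the standard definition, a weighted sum of the p-th powers of a matrix's singular values. A minimal numerical sketch of that definition follows; the function name, weighting, and choice of p are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def weighted_schatten_p_norm(X, weights, p=0.5):
    """Weighted Schatten p-norm raised to the p-th power:
    sum_i w_i * sigma_i(X)**p, where sigma_i are singular values.
    (Illustrative helper, not the paper's implementation.)"""
    sigma = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return float(np.sum(np.asarray(weights) * sigma ** p))

# With p = 1 and unit weights this reduces to the nuclear norm,
# the usual convex surrogate for matrix rank.
X = np.diag([3.0, 2.0, 1.0])
print(weighted_schatten_p_norm(X, np.ones(3), p=1.0))  # nuclear norm = 6.0
```

Choosing p < 1 with non-uniform weights penalizes large singular values less aggressively than the nuclear norm, which is the usual motivation for such regularizers in low-rank subspace clustering.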
Pages: 10
Related Papers (50 total)
  • [31] Adaptive Multi-View Subspace Clustering
    Tang Q.
    Zhang Y.
    He S.
    Zhou Z.
    Zhang, Yulong, 1600, Xi'an Jiaotong University (55): 102 - 112
  • [32] Partial Multi-view Subspace Clustering
    Xu, Nan
    Guo, Yanqing
    Zheng, Xin
    Wang, Qianyu
    Luo, Xiangyang
    PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018, : 1794 - 1801
  • [33] Selecting the Best Part From Multiple Laplacian Autoencoders for Multi-View Subspace Clustering
    Tang, Kewei
    Xu, Kaiqiang
    Jiang, Wei
    Su, Zhixun
    Sun, Xiyan
    Luo, Xiaonan
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (07) : 7457 - 7469
  • [34] Multimodal sparse and low-rank subspace clustering
    Abavisani, Mahdi
    Patel, Vishal M.
    INFORMATION FUSION, 2018, 39 : 168 - 177
  • [35] Correlation Structured Low-Rank Subspace Clustering
    You, Huamin
    Li, Yubai
    PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020, : 710 - 714
  • [36] Symmetric low-rank representation for subspace clustering
    Chen, Jie
    Zhang, Haixian
    Mao, Hua
    Sang, Yongsheng
    Yi, Zhang
    NEUROCOMPUTING, 2016, 173 : 1192 - 1202
  • [37] Multi-view Subspace Clustering Based on Unified Measure Standard
    Tang, Kewei
    Wang, Xiaoru
    Li, Jinhong
    NEURAL PROCESSING LETTERS, 2023, 55 (05) : 6231 - 6246
  • [38] Hierarchical bipartite graph based multi-view subspace clustering
    Zhou, Jie
    Nie, Feiping
    Luo, Xinglong
    He, Xingshi
    INFORMATION FUSION, 2025, 117
  • [39] Anchor-based scalable multi-view subspace clustering
    Zhou, Shibing
    Yang, Mingrui
    Wang, Xi
    Song, Wei
    INFORMATION SCIENCES, 2024, 666