Leveraging Transformer-based autoencoders for low-rank multi-view subspace clustering

Cited by: 0
Authors
Lin, Yuxiu [1,2]
Liu, Hui [1,2]
Yu, Xiao [1,2]
Zhang, Caiming [2,3]
Affiliations
[1] Shandong Univ Finance & Econ, Sch Comp Sci & Technol, Jinan 250014, Peoples R China
[2] Shandong Key Lab Lightweight Intelligent Comp & Vi, Jinan 250014, Peoples R China
[3] Shandong Univ, Sch Software, Jinan 250101, Peoples R China
Keywords
Multi-view representation learning; Subspace clustering; Transformer; Weighted Schatten p-norm
DOI
10.1016/j.patcog.2024.111331
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep multi-view subspace clustering is an active research topic that aims to integrate multi-view information to produce accurate cluster predictions. Constrained by the inherent heterogeneity of distinct views, existing works primarily rely on view-specific encoding structures for representation learning. Although effective to some extent, this approach may hinder the full exploitation of view information and increase the complexity of model training. To this end, this paper proposes a novel low-rank multi-view subspace clustering method, TALMSC, backed by Transformer-based autoencoders. Specifically, we extend the self-attention mechanism to multi-view clustering settings, developing multiple Transformer-based autoencoders that allow for modality-agnostic representation learning. Based on the extracted latent representations, we deploy a sample-wise weighted fusion module that incorporates contrastive learning and orthogonal operators to capture both consistency and diversity, thereby generating a comprehensive joint representation. Moreover, TALMSC employs a highly flexible low-rank regularizer based on the weighted Schatten p-norm to constrain self-expression and better explore the low-rank structure. Extensive experiments on five multi-view datasets show that our method achieves superior clustering performance over state-of-the-art methods.
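For context, the low-rank self-expression step described in the abstract is commonly written as the optimization problem below. This is a sketch using the standard definition of the weighted Schatten p-norm; the symbols Z (joint latent representation) and C (self-expression coefficient matrix) are illustrative and not taken from the paper itself.

$$\min_{C}\ \|Z - ZC\|_F^2 + \lambda\,\|C\|_{w,S_p}^p, \qquad \|C\|_{w,S_p}^p = \sum_{i} w_i\,\sigma_i(C)^p,$$

where $\sigma_i(C)$ is the $i$-th singular value of $C$, $w_i \ge 0$ are singular-value weights, $\lambda > 0$ balances reconstruction against the regularizer, and $0 < p \le 1$. With $p = 1$ and uniform weights the regularizer reduces to the nuclear norm, so the weighted Schatten p-norm offers a more flexible surrogate for the rank of $C$.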
Pages: 10