Leveraging Transformer-based autoencoders for low-rank multi-view subspace clustering

Cited: 0
|
Authors
Lin, Yuxiu [1 ,2 ]
Liu, Hui [1 ,2 ]
Yu, Xiao [1 ,2 ]
Zhang, Caiming [2 ,3 ]
Affiliations
[1] Shandong Univ Finance & Econ, Sch Comp Sci & Technol, Jinan 250014, Peoples R China
[2] Shandong Key Lab Lightweight Intelligent Comp & Vi, Jinan 250014, Peoples R China
[3] Shandong Univ, Sch Software, Jinan 250101, Peoples R China
Keywords
Multi-view representation learning; Subspace clustering; Transformer; Weighted Schatten p-norm;
D O I
10.1016/j.patcog.2024.111331
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep multi-view subspace clustering is an active research topic that aims to integrate information from multiple views to produce accurate cluster predictions. Constrained by the inherent heterogeneity of distinct views, existing works rely primarily on view-specific encoding structures for representation learning. Although effective to some extent, this approach may hinder the full exploitation of view information and increase the complexity of model training. To this end, this paper proposes a novel low-rank multi-view subspace clustering method, TALMSC, backed by Transformer-based autoencoders. Specifically, we extend the self-attention mechanism to multi-view clustering settings, developing multiple Transformer-based autoencoders that allow for modality-agnostic representation learning. Based on the extracted latent representations, we deploy a sample-wise weighted fusion module that incorporates contrastive learning and orthogonal operators to formulate both consistency and diversity, consequently generating a comprehensive joint representation. Moreover, TALMSC involves a highly flexible low-rank regularizer under the weighted Schatten p-norm to constrain self-expression and better explore the low-rank structure. Extensive experiments on five multi-view datasets show that our method enjoys superior clustering performance over state-of-the-art methods.
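The abstract's low-rank regularizer is built on the weighted Schatten p-norm, defined for a matrix X with singular values σ_i as (Σ_i w_i σ_i^p)^(1/p). The sketch below shows only this standard definition, not the paper's TALMSC model; the function name and the choice of uniform example weights are illustrative.

```python
import numpy as np

def weighted_schatten_p_norm(X, weights, p):
    """Weighted Schatten p-norm: (sum_i w_i * sigma_i^p)^(1/p),
    where sigma_i are the singular values of X in descending order.
    `weights` must have one entry per singular value."""
    sigma = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(weights * sigma**p) ** (1.0 / p))
```

With unit weights, p = 1 recovers the nuclear norm (sum of singular values) and p = 2 recovers the Frobenius norm; letting p shrink toward 0 with larger weights on small singular values gives a tighter surrogate for the rank, which is why such regularizers are considered flexible for low-rank subspace recovery.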
Pages: 10
Related papers
50 records in total
  • [1] Facilitated low-rank multi-view subspace clustering
    Zhang, Guang-Yu
    Huang, Dong
    Wang, Chang-Dong
    KNOWLEDGE-BASED SYSTEMS, 2023, 260
  • [2] Deep Low-Rank Multi-View Subspace Clustering
    Yan J.
    Li Z.
    Tang Q.
    Zhou Z.
    Li, Zhongyu, 1600, Xi'an Jiaotong University (55): 125 - 135
  • [3] Multi-view low-rank sparse subspace clustering
    Brbic, Maria
    Kopriva, Ivica
    PATTERN RECOGNITION, 2018, 73 : 247 - 258
  • [4] Latent Low-Rank Sparse Multi-view Subspace Clustering
    Zhang Z.
    Cao R.
    Li C.
    Cheng S.
    Li, Chen (lynnlc@126.com), 1600, Science Press (33): 344 - 352
  • [5] INCOMPLETE MULTI-VIEW SUBSPACE CLUSTERING WITH LOW-RANK TENSOR
    Liu, Jianlun
    Teng, Shaohua
    Zhang, Wei
    Fang, Xiaozhao
    Fei, Lunke
    Zhang, Zhuxiu
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3180 - 3184
  • [6] Weighted Low-Rank Tensor Representation for Multi-View Subspace Clustering
    Wang, Shuqin
    Chen, Yongyong
    Zheng, Fangying
    FRONTIERS IN PHYSICS, 2021, 8
  • [7] Multiple kernel low-rank representation-based robust multi-view subspace clustering
    Zhang, Xiaoqian
    Ren, Zhenwen
    Sun, Huaijiang
    Bai, Keqiang
    Feng, Xinghua
    Liu, Zhigui
    INFORMATION SCIENCES, 2021, 551 : 324 - 340
  • [8] Multi-view Subspace Clustering via a Global Low-Rank Affinity Matrix
    Qi, Lei
    Shi, Yinghuan
    Wang, Huihui
    Yang, Wanqi
    Gao, Yang
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2016, 2016, 9937 : 321 - 331
  • [9] Low-rank tensor multi-view subspace clustering via cooperative regularization
    Guoqing Liu
    Hongwei Ge
    Shuzhi Su
    Shuangxi Wang
    Multimedia Tools and Applications, 2023, 82 : 38141 - 38164
  • [10] Generalized Nonconvex Low-Rank Tensor Approximation for Multi-View Subspace Clustering
    Chen, Yongyong
    Wang, Shuqin
    Peng, Chong
    Hua, Zhongyun
    Zhou, Yicong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 4022 - 4035