A novel self-attention deep subspace clustering

Cited by: 7
Authors
Chen, Zhengfan [1 ]
Ding, Shifei [1 ,2 ]
Hou, Haiwei [1 ]
Institutions
[1] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Jiangsu, Peoples R China
[2] Minist Educ People S Republ China, Mine Digitizat Engn Res Ctr, Xuzhou 221116, Jiangsu, Peoples R China
Keywords
Deep subspace clustering; Convolutional autoencoder; Self-attention; REPRESENTATIONS;
DOI
10.1007/s13042-021-01318-4
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most existing deep subspace clustering methods leverage convolutional autoencoders to obtain feature representations for non-linear data points. These methods commonly adopt a structure with only a few convolutional layers, because stacking many convolutional layers may cause computational inefficiency and optimization difficulties. However, long-range dependencies can hardly be captured when convolutional operations are not repeated enough times, which degrades the quality of feature extraction on which the performance of deep subspace clustering methods heavily depends. To deal with this issue, we propose a novel self-attention deep subspace clustering (SADSC) model, which learns more favorable data representations by introducing self-attention mechanisms into convolutional autoencoders. Specifically, the SADSC encoder uses three convolutional layers and adds self-attention layers after the first and the third; the decoder has a symmetric structure. The self-attention layers preserve variable input sizes and can easily be combined with different convolutional layers in the autoencoder. Experimental results on handwritten digit recognition, face, and object clustering datasets demonstrate the advantages of SADSC over state-of-the-art deep subspace clustering models.
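A minimal PyTorch sketch of the architecture outlined in the abstract is given below; it is an illustration only, not the authors' implementation. The placement of the self-attention blocks (after the first and third encoder convolutions), the mirrored decoder, and the self-expressive layer follow the standard deep subspace clustering setup, while all channel counts, kernel sizes, and the SAGAN-style attention formulation (1x1 query/key/value convolutions with a learned residual weight) are assumptions made for the sketch.

```python
# Illustrative sketch only: layer sizes and the attention formulation are
# assumptions; the abstract only specifies three conv layers with
# self-attention after the first and third, plus a mirrored decoder.
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Self-attention over spatial positions; accepts any input H x W."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, 1)
        self.key = nn.Conv2d(channels, channels // 2, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//2)
        k = self.key(x).flatten(2)                     # (b, c//2, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection


class SADSCSketch(nn.Module):
    """Encoder: conv -> attention -> conv -> conv -> attention; mirrored decoder."""

    def __init__(self, in_ch=1, n_samples=1000):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            SelfAttention2d(16),                        # after the first conv layer
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),
            SelfAttention2d(32),                        # after the third conv layer
        )
        # Self-expressive layer: each latent code is reconstructed as a linear
        # combination of the others; C later yields the affinity matrix.
        # Expects the whole dataset in a single batch, as is standard for
        # self-expressive deep subspace clustering.
        self.C = nn.Parameter(1e-4 * torch.rand(n_samples, n_samples))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_ch, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)                             # latent features (n, c, h, w)
        z_flat = z.flatten(1)
        z_se = (self.C @ z_flat).view_as(z)             # self-expressive reconstruction
        return self.decoder(z_se), z_flat, self.C
```

As in other self-expressive deep subspace clustering methods, the learned coefficient matrix C would typically be symmetrized (e.g. |C| + |C|^T) and passed to spectral clustering to obtain the final cluster assignments.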
Pages: 2377 - 2387
Number of pages: 11
Related Papers
50 records in total
  • [1] A novel self-attention deep subspace clustering
    Zhengfan Chen
    Shifei Ding
    Haiwei Hou
    International Journal of Machine Learning and Cybernetics, 2021, 12 : 2377 - 2387
  • [2] Self-attention Adversarial Based Deep Subspace Clustering
    Yin M.
    Wu H.-Y.
    Xie S.-L.
    Yang Q.-Y.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (01): 271 - 281
  • [3] A Multiscale Self-Attention Deep Clustering for Change Detection in SAR Images
    Dong, Huihui
    Ma, Wenping
    Jiao, Licheng
    Liu, Fang
    Li, LingLing
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [4] Deep Structure and Attention Aware Subspace Clustering
    Wu, Wenhao
    Wang, Weiwei
    Kong, Shengjiang
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IV, 2024, 14428 : 139 - 150
  • [5] Deep Clustering Efficient Learning Network for Motion Recognition Based on Self-Attention Mechanism
    Ru, Tielin
    Zhu, Ziheng
    APPLIED SCIENCES-BASEL, 2023, 13 (05)
  • [6] Deep Semantic Role Labeling with Self-Attention
    Tan, Zhixing
    Wang, Mingxuan
    Xie, Jun
    Chen, Yidong
    Shi, Xiaodong
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 4929 - 4936
  • [7] Deep CNNs With Self-Attention for Speaker Identification
    Nguyen Nang An
    Nguyen Quang Thanh
    Liu, Yanbing
    IEEE ACCESS, 2019, 7 : 85327 - 85337
  • [8] Compressed Self-Attention for Deep Metric Learning
    Chen, Ziye
    Gong, Mingming
    Xu, Yanwu
    Wang, Chaohui
    Zhang, Kun
    Du, Bo
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 3561 - 3568
  • [9] Denoising adaptive deep clustering with self-attention mechanism on single-cell sequencing data
    Su, Yansen
    Lin, Rongxin
    Wang, Jing
    Tan, Dayu
    Zheng, Chunhou
    BRIEFINGS IN BIOINFORMATICS, 2023, 24 (02)
  • [10] SELF-ATTENTION GUIDED DEEP FEATURES FOR ACTION RECOGNITION
    Xiao, Renyi
    Hou, Yonghong
    Guo, Zihui
    Li, Chuankun
    Wang, Pichao
    Li, Wanqing
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 1060 - 1065