Multi-scale attention and loss penalty mechanism for multi-view clustering

Cited by: 1
Authors
Wang, Tingyu [1 ,2 ]
Zhai, Rui [1 ,2 ]
Wang, Longge [1 ,2 ]
Yu, Junyang [1 ,2 ]
Li, Han [1 ,2 ]
Wang, Zhicheng [1 ,2 ]
Wu, Jinhu [1 ,2 ]
Affiliations
[1] Henan Univ, Coll Software, Kaifeng 475004, Peoples R China
[2] Henan Prov Intelligent Data Proc Res Engn Res Ctr, Kaifeng 475004, Peoples R China
Keywords
Multi-view clustering; Contrastive learning; Loss penalty mechanism; Attention mechanism;
DOI
10.1007/s00530-024-01637-w
CLC number
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
Deep multi-view clustering exploits data observed from multiple perspectives to assign sample entities to their respective categories. However, prevailing methods are often inefficient during the feature fusion stage, particularly at isolating the pivotal features that drive clustering. To address this problem, this paper proposes a multi-view clustering method based on multi-scale attention and a loss penalty mechanism (MALPMVC). MALPMVC first uses an autoencoder to extract latent feature representations, then applies multi-scale attention to emphasize the most informative feature channels and spatial regions. A loss penalty mechanism subsequently directs the model toward hard-to-classify samples, improving its ability to learn discriminative features from them. Finally, the fused features are passed to the clustering module, which partitions the samples into clusters. Extensive experiments show that MALPMVC outperforms 10 competitive clustering approaches, including CoMVC, MFLVC, and GCFAggMVC. Moreover, as the number of views increases, the model effectively counteracts the adverse influence of mutually exclusive views. In particular, on the Caltech-4V and Caltech-5V datasets it exceeds the clustering accuracy of GCFAggMVC by 12.36% and 9.21%, respectively.
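The record gives no formulas for the attention or loss-penalty components, so the sketch below only illustrates the two ideas the abstract names, under two assumptions: a CBAM-style channel-then-spatial attention over latent feature maps, and a focal-loss-style weighting that up-weights hard-to-classify samples. The class and function names (ChannelSpatialAttention, loss_penalty, gamma) and the pseudo-label targets are hypothetical and not taken from the paper.

```python
# Hypothetical sketch (not the paper's code): CBAM-style attention plus a
# focal-loss-style penalty that emphasizes hard-to-classify samples.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelSpatialAttention(nn.Module):
    """Re-weights latent feature maps along channels, then spatial positions."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite channels.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 1-channel map from pooled channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = x.mean(dim=(2, 3))                          # (B, C)
        mx = x.amax(dim=(2, 3))                           # (B, C)
        ch_att = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        x = x * ch_att.view(b, c, 1, 1)                   # channel re-weighting
        sp = torch.cat([x.mean(dim=1, keepdim=True),
                        x.amax(dim=1, keepdim=True)], dim=1)   # (B, 2, H, W)
        sp_att = torch.sigmoid(self.spatial_conv(sp))           # (B, 1, H, W)
        return x * sp_att                                 # spatial re-weighting


def loss_penalty(logits: torch.Tensor, targets: torch.Tensor,
                 gamma: float = 2.0) -> torch.Tensor:
    """Focal-style penalty: down-weight easy samples, emphasize hard ones."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-sample loss
    p_true = torch.exp(-ce)                                  # prob. of true class
    return ((1.0 - p_true) ** gamma * ce).mean()


# Toy usage: attention over latent maps, penalty over pseudo-label logits.
latent = torch.randn(8, 64, 7, 7)
attended = ChannelSpatialAttention(64)(latent)
logits = torch.randn(8, 5)
pseudo_labels = torch.randint(0, 5, (8,))
print(attended.shape, loss_penalty(logits, pseudo_labels).item())
```

The pseudo-labels here merely stand in for whatever clustering-derived targets the method actually optimizes; the point of the penalty term is only to show how gradient mass can be redirected toward ambiguous samples rather than already well-separated ones.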
Pages: 15