Strengthening incomplete multi-view clustering: An attention contrastive learning method

Times Cited: 0
Authors
Hou, Shudong [1]
Guo, Lanlan [1]
Wei, Xu [1]
Affiliations
[1] Anhui Univ Technol, Sch Comp Sci & Technol, Maanshan 243002, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Incomplete multi-view clustering; Cross-view encoder; Contrastive learning; High confidence; Graph constraint;
DOI
10.1016/j.imavis.2025.105493
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Incomplete multi-view clustering presents greater challenges than traditional multi-view clustering. Although significant progress has been made in this field in recent years, multi-view clustering relies on the consistency and completeness of views to ensure the accurate transmission of data information. However, data loss is inevitable during collection and transmission, leading to missing views and making joint learning on incomplete multi-view data more difficult. To address this issue, we propose a multi-view contrastive learning framework based on the attention mechanism. Previous contrastive learning has focused mainly on relationships between isolated sample pairs, which limits the robustness of such methods. Our method selects positive samples from both global and local perspectives, using a nearest-neighbor graph to maximize the correlation between the local features and the latent features of each view. In addition, we use a cross-view encoder network with a self-attention structure to fuse the low-dimensional representations of the individual views into a joint representation, and we guide the learning of this joint representation with a high-confidence structure. Furthermore, we introduce graph-constraint learning to explore potential neighbor relationships among instances and thereby facilitate data reconstruction. Experimental results on six multi-view datasets demonstrate that our method is significantly more effective than existing methods.
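Two ingredients the abstract describes, selecting contrastive positives from a nearest-neighbor graph and fusing per-view latent representations with a self-attention-style mechanism, might be sketched roughly as follows. This is a minimal NumPy illustration under assumed shapes and function names; it is not the authors' implementation, and the paper's actual encoder, loss, and confidence-guided training are not reproduced here.

```python
import numpy as np

def knn_positives(z, k=2):
    """Pick each sample's k nearest neighbors (cosine similarity) as
    contrastive positives, instead of relying on isolated sample pairs.
    z: (n, d) latent representations; returns (n, k) neighbor indices."""
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = zn @ zn.T
    np.fill_diagonal(sim, -np.inf)          # a sample is not its own neighbor
    return np.argsort(-sim, axis=1)[:, :k]

def attention_fuse(views):
    """Fuse per-view low-dimensional representations into a joint one with
    scaled dot-product attention across views, per sample.
    views: list of (n, d) arrays, one per view; returns (n, d)."""
    stacked = np.stack(views, axis=1)                     # (n, V, d)
    d = stacked.shape[-1]
    scores = np.einsum('nvd,nwd->nvw', stacked, stacked) / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)              # softmax over views
    fused = np.einsum('nvw,nwd->nvd', attn, stacked)      # attend across views
    return fused.mean(axis=1)                             # pool to (n, d)

# Illustrative usage with two synthetic views
rng = np.random.default_rng(0)
views = [rng.normal(size=(8, 4)) for _ in range(2)]
joint = attention_fuse(views)           # (8, 4) joint representation
positives = knn_positives(joint, k=2)   # (8, 2) positive indices per sample
```

In a full pipeline, `positives` would feed a contrastive loss that pulls each sample toward its graph neighbors, which is the "global and local" positive selection the abstract refers to.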
Pages: 11