Strengthening incomplete multi-view clustering: An attention contrastive learning method

Cited by: 0
Authors
Hou, Shudong [1 ]
Guo, Lanlan [1 ]
Wei, Xu [1 ]
Affiliations
[1] Anhui Univ Technol, Sch Comp Sci & Technol, Maanshan 243002, Anhui, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Incomplete multi-view clustering; Cross-view encoder; Contrastive learning; High confidence; Graph constraint;
DOI
10.1016/j.imavis.2025.105493
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Incomplete multi-view clustering presents greater challenges than traditional multi-view clustering. Although significant progress has been made in this field in recent years, multi-view clustering still relies on the consistency and completeness of views to ensure that data information is transmitted accurately. However, data loss is inevitable during collection and transmission, which leads to partially missing views and increases the difficulty of joint learning on incomplete multi-view data. To address this issue, we propose a multi-view contrastive learning framework based on the attention mechanism. Previous contrastive learning has mainly focused on the relationships between isolated sample pairs, which limits the robustness of such methods. Our method selects positive samples from both global and local perspectives by using the nearest-neighbor graph to maximize the correlation between the local features and the latent features of each view. In addition, we use a cross-view encoder network with a self-attention structure to fuse the low-dimensional representations of the individual views into a joint representation, and we guide the learning of this joint representation through a high-confidence structure. Furthermore, we introduce graph-constraint learning to explore potential neighbor relationships among instances and thereby facilitate data reconstruction. Experimental results on six multi-view datasets demonstrate that our method is significantly more effective than existing methods.
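
As a rough, assumption-based illustration of the pipeline sketched in the abstract (self-attention fusion of per-view embeddings plus contrastive learning with nearest-neighbor positives), the following minimal PyTorch sketch may be helpful. It is not the authors' implementation; all names and hyperparameters (CrossViewFusion, neighbor_contrastive_loss, k, temperature) are hypothetical, and treating a sample's nearest neighbors as extra positives is just one way to move beyond isolated sample pairs.

# Hypothetical sketch only (not the authors' code): fuses per-view embeddings with
# self-attention and forms a neighbor-aware contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossViewFusion(nn.Module):
    """Stack per-view embeddings as a short sequence, apply self-attention across
    views, then mean-pool into a joint representation."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, view_embeddings):               # list of [N, dim] tensors
        tokens = torch.stack(view_embeddings, dim=1)  # [N, V, dim]
        fused, _ = self.attn(tokens, tokens, tokens)  # attention across the V views
        return fused.mean(dim=1)                      # joint representation, [N, dim]


def neighbor_contrastive_loss(z1, z2, k=5, temperature=0.5):
    """InfoNCE-style loss in which, besides the paired instance in the other view,
    the k nearest neighbors (cosine similarity within view 1) also count as positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature                   # [N, N] cross-view similarities
    n = z1.size(0)

    with torch.no_grad():                             # build the positive mask
        nn_sim = z1 @ z1.t()
        nn_sim.fill_diagonal_(-float("inf"))
        nbr_idx = nn_sim.topk(k, dim=1).indices       # k nearest neighbors per sample
        pos = torch.eye(n, dtype=torch.bool, device=z1.device)
        pos[torch.arange(n, device=z1.device).unsqueeze(1), nbr_idx] = True

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -log_prob[pos].sum() / pos.sum()


if __name__ == "__main__":
    torch.manual_seed(0)
    z_v1, z_v2 = torch.randn(32, 64), torch.randn(32, 64)  # toy per-view embeddings
    joint = CrossViewFusion(dim=64)([z_v1, z_v2])
    loss = neighbor_contrastive_loss(z_v1, z_v2, k=3)
    print(joint.shape, loss.item())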
Pages: 11
Related papers (50 in total)
  • [21] Lin, Yijie; Gou, Yuanbiao; Liu, Xiaotian; Bai, Jinfeng; Lv, Jiancheng; Peng, Xi. Dual Contrastive Prediction for Incomplete Multi-View Representation Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(4): 4447-4461.
  • [22] Gao, Hang; Peng, Yuxing; Jian, Songlei. Incomplete Multi-view Clustering. Intelligent Information Processing VIII, 2016, 486: 245-255.
  • [23] Shen, Qiangqiang; Zhang, Xuanqi; Wang, Shuqin; Li, Yuanman; Liang, Yongsheng; Chen, Yongyong. Dual Completion Learning for Incomplete Multi-View Clustering. IEEE Transactions on Emerging Topics in Computational Intelligence, 2025, 9(1): 455-467.
  • [24] Wang, Yongchun; Yang, Youlong; Ning, Tong. Local Structure Learning for Incomplete Multi-view Clustering. Applied Intelligence, 2024, 54: 3308-3324.
  • [25] Zhou, Wei; Wang, Hao; Yang, Yan. Consensus Graph Learning for Incomplete Multi-view Clustering. Advances in Knowledge Discovery and Data Mining (PAKDD 2019), Part I, 2019, 11439: 529-540.
  • [26] Wang, Yongchun; Yang, Youlong; Ning, Tong. Local Structure Learning for Incomplete Multi-view Clustering. Applied Intelligence, 2024, 54(4): 3308-3324.
  • [27] Yuan, Honglin; Sun, Yuan; Zhou, Fei; Wen, Jing; Yuan, Shihua; You, Xiaojian; Ren, Zhenwen. Prototype Matching Learning for Incomplete Multi-View Clustering. IEEE Transactions on Image Processing, 2025, 34: 828-841.
  • [28] Du, Tingting; Zheng, Wei; Xu, Xingang. Composite Attention Mechanism Network for Deep Contrastive Multi-view Clustering. Neural Networks, 2024, 176.
  • [29] Pan, Erlin; Kang, Zhao. Multi-view Contrastive Graph Clustering. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
  • [30] Xu, Jie; Tang, Huayi; Ren, Yazhou; Peng, Liang; Zhu, Xiaofeng; He, Lifang. Multi-level Feature Learning for Contrastive Multi-view Clustering. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), 2022: 16030-16039.