A Clustering-Guided Contrastive Fusion for Multi-View Representation Learning

Cited: 19
Authors
Ke, Guanzhou [1 ]
Chao, Guoqing [2 ]
Wang, Xiaoli [3 ]
Xu, Chenyang [4 ]
Zhu, Yongqi [1 ]
Yu, Yang [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Data Sci & Intelligent Decis Support, Beijing Inst Big Data Res, Beijing 100080, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Weihai 264209, Peoples R China
[3] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210000, Peoples R China
[4] Wuyi Univ, Fac Intelligent Mfg, Jiangmen 529000, Peoples R China
Keywords
Task analysis; Semantics; Robustness; Representation learning; Image reconstruction; Data models; Learning systems; Multi-view representation learning; contrastive learning; fusion; clustering; incomplete view; ENHANCEMENT;
DOI
10.1109/TCSVT.2023.3300319
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Multi-view representation learning aims to extract comprehensive information from multiple sources. It has achieved significant success in applications such as video understanding and 3D rendering. However, how to improve the robustness and generalization of multi-view representations in unsupervised and incomplete scenarios remains an open question in this field. In this study, we discovered a positive correlation between the semantic distance of multi-view representations and their tolerance for data corruption. Moreover, we found that the ratio of consistent to complementary information significantly impacts the performance of discriminative and generative tasks built on multi-view representations. Based on these observations, we propose an end-to-end CLustering-guided cOntrastiVE fusioN (CLOVEN) method, which enhances the robustness and generalization of multi-view representations simultaneously. To balance consistency and complementarity, we design an asymmetric contrastive fusion module. The module first combines all view-specific representations into a comprehensive representation through a scaling fusion layer. Then, the comprehensive representation is aligned with each view-specific representation via a contrastive learning loss, resulting in a view-common representation that includes both consistent and complementary information. We prevent the module from learning suboptimal solutions by not allowing information alignment between view-specific representations. We further design a clustering-guided module that encourages the aggregation of semantically similar views, which reduces the semantic distance of the view-common representation. We quantitatively and qualitatively evaluate CLOVEN on five datasets, demonstrating its superiority over 13 other competitive multi-view learning methods in terms of clustering and classification performance. In the data-corrupted scenario, our proposed method resists noise interference better than its competitors. Additionally, visualizations demonstrate that CLOVEN preserves the intrinsic structure of view-specific representations and improves the compactness of view-common representations. Our code can be found at https://github.com/guanzhou-ke/cloven.
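To make the fusion and alignment idea in the abstract more concrete, below is a minimal PyTorch-style sketch. The names (ScalingFusion, info_nce, asymmetric_contrastive_loss), the learnable per-view weights, and the InfoNCE formulation are illustrative assumptions rather than the authors' exact implementation; the clustering-guided module and any loss weighting are omitted. The official implementation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScalingFusion(nn.Module):
    """Combines view-specific representations into one comprehensive representation.
    The learnable per-view weights are an assumed form of the 'scaling fusion layer'."""

    def __init__(self, num_views: int, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_views, 1, 1))  # one weight per view
        self.proj = nn.Linear(dim, dim)

    def forward(self, views):
        # views: list of (batch, dim) tensors, one per view
        stacked = torch.stack(views, dim=0)                        # (num_views, batch, dim)
        fused = (self.scale.softmax(dim=0) * stacked).sum(dim=0)   # weighted sum over views
        return self.proj(fused)                                    # comprehensive representation


def info_nce(anchor, positive, temperature=0.5):
    """Standard InfoNCE loss: matching rows of `anchor` and `positive` are positives."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature                   # (batch, batch) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)    # i-th row matches i-th column
    return F.cross_entropy(logits, labels)


def asymmetric_contrastive_loss(common, views):
    """Aligns the fused (view-common) representation with each view-specific one.
    The asymmetry: view-specific representations are never aligned with each other."""
    return sum(info_nce(common, v) for v in views) / len(views)


# Toy usage: two views, batch of 8 samples, 64-dimensional representations.
views = [torch.randn(8, 64), torch.randn(8, 64)]
fusion = ScalingFusion(num_views=2, dim=64)
common = fusion(views)
loss = asymmetric_contrastive_loss(common, views)
loss.backward()
```

The asymmetry mirrors the abstract's design choice: only the comprehensive representation is pulled toward each view, so view-specific representations keep their intrinsic structure instead of collapsing onto one another.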
Pages: 2056-2069
Number of pages: 14
Related Papers
50 items in total
  • [1] Nonparametric Clustering-Guided Cross-View Contrastive Learning for Partially View-Aligned Representation Learning
    Qian, Shengsheng
    Xue, Dizhan
    Hu, Jun
    Zhang, Huaiwen
    Xu, Changsheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 6158 - 6172
  • [2] Latent Representation Guided Multi-View Clustering
    Huang, Shudong
    Tsang, Ivor W.
    Xu, Zenglin
    Lv, Jiancheng
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (07) : 7082 - 7087
  • [3] Multi-view clustering with semantic fusion and contrastive learning
    Yu, Hui
    Bian, Hui-Xiang
    Chong, Zi-Ling
    Liu, Zun
    Shi, Jian-Yu
    NEUROCOMPUTING, 2024, 603
  • [4] Graph Contrastive Partial Multi-View Clustering
    Wang, Yiming
    Chang, Dongxia
    Fu, Zhiqiang
    Wen, Jie
    Zhao, Yao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 6551 - 6562
  • [5] CONAN: Contrastive Fusion Networks for Multi-view Clustering
    Ke, Guanzhou
    Hong, Zhiyong
    Zeng, Zhiqiang
    Liu, Zeyi
    Sun, Yangjie
    Xie, Yannan
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 653 - 660
  • [6] Dual Contrastive Prediction for Incomplete Multi-View Representation Learning
    Lin, Yijie
    Gou, Yuanbiao
    Liu, Xiaotian
    Bai, Jinfeng
    Lv, Jiancheng
    Peng, Xi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (04) : 4447 - 4461
  • [7] Self-Weighted Contrastive Fusion for Deep Multi-View Clustering
    Wu, Song
    Zheng, Yan
    Ren, Yazhou
    He, Jing
    Pu, Xiaorong
    Huang, Shudong
    Hao, Zhifeng
    He, Lifang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 9150 - 9162
  • [8] Graph Structure Aware Contrastive Multi-View Clustering
    Chen, Rui
    Tang, Yongqiang
    Cai, Xiangrui
    Yuan, Xiaojie
    Feng, Wenlong
    Zhang, Wensheng
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (03) : 260 - 274
  • [9] Selective Contrastive Learning for Unpaired Multi-View Clustering
    Xin, Like
    Yang, Wanqi
    Wang, Lei
    Yang, Ming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 1749 - 1763
  • [10] Dual contrastive learning for multi-view clustering
    Bao, Yichen
    Zhao, Wenhui
    Zhao, Qin
    Gao, Quanxue
    Yang, Ming
    NEUROCOMPUTING, 2024, 599