A Clustering-Guided Contrastive Fusion for Multi-View Representation Learning

Cited by: 19
Authors
Ke, Guanzhou [1 ]
Chao, Guoqing [2 ]
Wang, Xiaoli [3 ]
Xu, Chenyang [4 ]
Zhu, Yongqi [1 ]
Yu, Yang [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Data Sci & Intelligent Decis Support, Beijing Inst Big Data Res, Beijing 100080, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Weihai 264209, Peoples R China
[3] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210000, Peoples R China
[4] Wuyi Univ, Fac Intelligent Mfg, Jiangmen 529000, Peoples R China
Keywords
Task analysis; Semantics; Robustness; Representation learning; Image reconstruction; Data models; Learning systems; Multi-view representation learning; contrastive learning; fusion; clustering; incomplete view; ENHANCEMENT;
DOI
10.1109/TCSVT.2023.3300319
Chinese Library Classification: TM [Electrical engineering]; TN [Electronic and communication technology]
Discipline codes: 0808; 0809
Abstract
Multi-view representation learning aims to extract comprehensive information from multiple sources. It has achieved significant success in applications such as video understanding and 3D rendering. However, how to improve the robustness and generalization of multi-view representations in unsupervised and incomplete scenarios remains an open question in this field. In this study, we discovered a positive correlation between the semantic distance of multi-view representations and their tolerance for data corruption. Moreover, we found that the ratio of consistent to complementary information significantly impacts the performance of discriminative and generative tasks built on multi-view representations. Based on these observations, we propose an end-to-end CLustering-guided cOntrastiVE fusioN (CLOVEN) method, which enhances the robustness and generalization of multi-view representations simultaneously. To balance consistency and complementarity, we design an asymmetric contrastive fusion module. The module first combines all view-specific representations into a comprehensive representation through a scaling fusion layer. The information of the comprehensive representation and the view-specific representations is then aligned via a contrastive learning loss function, resulting in a view-common representation that contains both consistent and complementary information. We prevent the module from learning suboptimal solutions by disallowing information alignment between view-specific representations. We also design a clustering-guided module that encourages the aggregation of semantically similar views, which reduces the semantic distance of the view-common representation. We quantitatively and qualitatively evaluate CLOVEN on five datasets, demonstrating its superiority over 13 competitive multi-view learning methods in terms of clustering and classification performance. In the data-corrupted scenario, the proposed method resists noise interference better than its competitors. Additionally, visualizations demonstrate that CLOVEN preserves the intrinsic structure of view-specific representations and improves the compactness of view-common representations. Our code can be found at https://github.com/guanzhou-ke/cloven.
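To make the fusion mechanism described in the abstract more concrete, below is a minimal PyTorch-style sketch of the idea: a learnable scaling fusion that combines view-specific representations into a comprehensive representation, and an InfoNCE-style loss that aligns each view-specific representation only with the fused representation, never view with view. All names here (ScalingFusion, asymmetric_contrastive_loss, the softmax weighting, the temperature value) are illustrative assumptions and do not reproduce the authors' CLOVEN implementation; see the linked repository for the actual code.

# Illustrative sketch only; module and function names are assumptions,
# not the authors' CLOVEN code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScalingFusion(nn.Module):
    """Combine view-specific representations into one comprehensive vector
    using learnable per-view scaling weights (one plausible reading of the
    'scaling fusion layer' mentioned in the abstract)."""
    def __init__(self, num_views, dim):
        super().__init__()
        self.scales = nn.Parameter(torch.ones(num_views))  # learnable per-view weights
        self.proj = nn.Linear(dim, dim)

    def forward(self, views):
        # views: list of (batch, dim) tensors, one per view
        weights = torch.softmax(self.scales, dim=0)
        fused = sum(w * v for w, v in zip(weights, views))
        return self.proj(fused)

def asymmetric_contrastive_loss(fused, views, temperature=0.5):
    """InfoNCE-style alignment between the fused (view-common) representation
    and each view-specific representation. View-to-view pairs are deliberately
    not aligned, matching the asymmetry described in the abstract."""
    loss = 0.0
    z_f = F.normalize(fused, dim=1)
    for v in views:
        z_v = F.normalize(v, dim=1)
        logits = z_f @ z_v.t() / temperature            # (batch, batch) similarities
        targets = torch.arange(z_f.size(0), device=z_f.device)
        loss = loss + F.cross_entropy(logits, targets)  # diagonal entries are positives
    return loss / len(views)

if __name__ == "__main__":
    batch, dim = 8, 32
    views = [torch.randn(batch, dim) for _ in range(3)]  # three toy views
    fusion = ScalingFusion(num_views=3, dim=dim)
    fused = fusion(views)
    print(asymmetric_contrastive_loss(fused, views))     # scalar loss

In this sketch the asymmetry is carried entirely by the loss: only (fused, view) pairs appear as positives, which mirrors the abstract's statement that alignment between view-specific representations is deliberately avoided.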
Pages: 2056-2069
Page count: 14
Related papers (50 total)
• [41] Zhang, Yongshan; Wu, Jia; Zhou, Chuan; Cai, Zhihua; Yang, Jian; Yu, Philip S. Multi-View Fusion with Extreme Learning Machine for Clustering. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2019, 10 (05).
• [42] Lin, Renjie; Lin, Yongkun; Lin, Zhenghong; Du, Shide; Wang, Shiping. CCR-Net: Consistent contrastive representation network for multi-view clustering. INFORMATION SCIENCES, 2023, 637.
• [43] Hou, Shudong; Guo, Lanlan; Wei, Xu. Strengthening incomplete multi-view clustering: An attention contrastive learning method. IMAGE AND VISION COMPUTING, 2025, 157.
• [44] Fu, Lele; Huang, Sheng; Zhang, Lei; Yang, Jinghua; Zheng, Zibin; Zhang, Chuanfu; Chen, Chuan. Subspace-Contrastive Multi-View Clustering. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (09).
• [45] Peng, Bo; Lin, Guoting; Lei, Jianjun; Qin, Tianyi; Cao, Xiaochun; Ling, Nam. Contrastive Multi-View Learning for 3D Shape Clustering. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 6262-6272.
• [46] Hu, Shizhe; Zou, Guoliang; Zhang, Chaoyang; Lou, Zhengzheng; Geng, Ruilin; Ye, Yangdong. Joint contrastive triple-learning for deep multi-view clustering. INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (03).
• [47] Li, Pengyuan; Chang, Dongxia; Kong, Zisen; Wang, Yiming; Zhao, Yao. DCMVC: Dual contrastive multi-view clustering. NEUROCOMPUTING, 2025, 635.
• [48] Su, Peng; Li, Yixi; Li, Shujian; Huang, Shudong; Lv, Jiancheng. Robust Contrastive Multi-view Kernel Clustering. PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024: 4938-4945.
• [49] Zhu, Zhengzhong; Pu, Chujun; Zhang, Xuejie; Wang, Jin; Zhou, Xiaobing. Dual-dimensional contrastive learning for incomplete multi-view clustering. NEUROCOMPUTING, 2025, 615.
• [50] Liu, Mingyang; Yang, Zuyuan; Han, Wei; Xie, Shengli. Progressive Neighbor-masked Contrastive Learning for Fusion-style Deep Multi-view Clustering. NEURAL NETWORKS, 2024, 179.