A Clustering-Guided Contrastive Fusion for Multi-View Representation Learning

Cited by: 28
Authors
Ke, Guanzhou [1 ]
Chao, Guoqing [2 ]
Wang, Xiaoli [3 ]
Xu, Chenyang [4 ]
Zhu, Yongqi [1 ]
Yu, Yang [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Data Sci & Intelligent Decis Support, Beijing Inst Big Data Res, Beijing 100080, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Weihai 264209, Peoples R China
[3] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210000, Peoples R China
[4] Wuyi Univ, Fac Intelligent Mfg, Jiangmen 529000, Peoples R China
Keywords
Task analysis; Semantics; Robustness; Representation learning; Image reconstruction; Data models; Learning systems; Multi-view representation learning; contrastive learning; fusion; clustering; incomplete view; ENHANCEMENT
DOI
10.1109/TCSVT.2023.3300319
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
Multi-view representation learning aims to extract comprehensive information from multiple sources and has achieved significant success in applications such as video understanding and 3D rendering. However, how to improve the robustness and generalization of multi-view representations in unsupervised and incomplete scenarios remains an open question in this field. In this study, we discover a positive correlation between the semantic distance of multi-view representations and their tolerance to data corruption. Moreover, we find that the ratio of consistent to complementary information significantly impacts the performance of both discriminative and generative tasks on multi-view representations. Based on these observations, we propose an end-to-end CLustering-guided cOntrastiVE fusioN (CLOVEN) method that enhances the robustness and generalization of multi-view representations simultaneously. To balance consistency and complementarity, we design an asymmetric contrastive fusion module. The module first combines all view-specific representations into a comprehensive representation through a scaling fusion layer; the comprehensive representation is then aligned with each view-specific representation via a contrastive loss, yielding a view-common representation that captures both consistent and complementary information. By disallowing information alignment between pairs of view-specific representations, we prevent the module from converging to suboptimal solutions. We further design a clustering-guided module that encourages the aggregation of semantically similar views, which reduces the semantic distance of the view-common representation. We evaluate CLOVEN quantitatively and qualitatively on five datasets, demonstrating its superiority over 13 competitive multi-view learning methods in clustering and classification performance. In data-corrupted scenarios, the proposed method resists noise interference better than its competitors. Visualizations further show that CLOVEN preserves the intrinsic structure of view-specific representations while improving the compactness of the view-common representation. Our code can be found at https://github.com/guanzhou-ke/cloven.
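The fusion-then-align scheme described in the abstract can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical rendering of the idea, assuming a learnable per-view scaling followed by a linear projection for the scaling fusion layer, and an InfoNCE-style loss for the fused-to-view alignment; the authors' actual implementation (layer design, temperature, clustering-guided term) lives in the linked repository and may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScalingFusion(nn.Module):
    """Hypothetical scaling fusion layer: weight each view-specific
    representation with a learnable scalar, concatenate, and project
    back to the shared dimension."""
    def __init__(self, num_views: int, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_views))   # one weight per view
        self.proj = nn.Linear(num_views * dim, dim)

    def forward(self, views):
        # views: list of num_views tensors, each of shape (batch, dim)
        scaled = [self.scale[i] * v for i, v in enumerate(views)]
        return self.proj(torch.cat(scaled, dim=-1))        # (batch, dim)

def asymmetric_contrastive_loss(fused, views, temperature=0.5):
    """InfoNCE-style alignment between the fused (comprehensive)
    representation and each view-specific representation. Only
    fused<->view pairs are contrasted; view<->view pairs are
    deliberately excluded, mirroring the asymmetry in the abstract."""
    z = F.normalize(fused, dim=-1)
    labels = torch.arange(z.size(0), device=z.device)      # positives on the diagonal
    loss = 0.0
    for v in views:
        zv = F.normalize(v, dim=-1)
        logits = z @ zv.t() / temperature                  # (batch, batch) similarities
        loss = loss + F.cross_entropy(logits, labels)
    return loss / len(views)

# Usage sketch: two views with 128-dimensional representations.
views = [torch.randn(32, 128), torch.randn(32, 128)]
fusion = ScalingFusion(num_views=2, dim=128)
fused = fusion(views)
loss = asymmetric_contrastive_loss(fused, views)
```

Because the loss only contrasts the fused vector against individual views, gradients never pull two view-specific encoders directly toward each other, which is one way to read the constraint that no information alignment occurs between view-specific representations.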
Pages: 2056-2069
Page count: 14