Nonparametric Clustering-Guided Cross-View Contrastive Learning for Partially View-Aligned Representation Learning

Times cited: 0
Authors
Qian, Shengsheng [1 ,2 ]
Xue, Dizhan [1 ,2 ]
Hu, Jun [3 ]
Zhang, Huaiwen [4 ]
Xu, Changsheng [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 101408, Peoples R China
[3] Natl Univ Singapore, Sch Comp, Singapore 117417, Singapore
[4] Inner Mongolia Univ, Coll Comp Sci, Hohhot 010021, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
Representation learning; Contrastive learning; Semantics; Costs; Robustness; Gaussian mixture model; Faces; Data models; Data collection; Data augmentation; Partially view-aligned representation learning; multi-view representation learning; contrastive learning; nonparametric clustering; DIRICHLET; INFERENCE;
DOI
10.1109/TIP.2024.3480701
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the increasing availability of multi-view data, multi-view representation learning has emerged as a prominent research area. However, collecting strictly view-aligned data is usually expensive, and learning from both aligned and unaligned data is more practical. Therefore, Partially View-aligned Representation Learning (PVRL) has recently attracted increasing attention. After multi-view representations are aligned according to their semantic similarity, the aligned representations can be used to facilitate downstream tasks such as clustering. However, existing methods are constrained by the following limitations: 1) they learn semantic relations across views from the known correspondences, which are incomplete, and the resulting false negative pairs (FNPs) can significantly impair learning effectiveness; 2) existing strategies for alleviating the impact of FNPs are largely heuristic and lack a theoretical account of the conditions under which they apply; and 3) they attempt to identify FNPs by distance in the common space and fail to explore the semantic relations between multi-view data. In this paper, we propose Nonparametric Clustering-guided Cross-view Contrastive Learning (NC³L) for PVRL to address the above issues. First, we propose to estimate the similarity matrix between multi-view data in a marginal cross-view contrastive loss so as to approximate the similarity matrix of supervised contrastive learning (CL). Second, we establish the theoretical foundation of the proposed method by analyzing the error bounds of the loss function and its derivatives between our method and supervised CL. Third, we propose Deep Variational Nonparametric Clustering (DeepVNC), which designs a deep reparameterized variational inference scheme for Dirichlet process Gaussian mixture models to construct cluster-level similarities between multi-view data and to discover FNPs. Additionally, we introduce a reparameterization trick that improves the robustness and performance of the proposed CL method. Extensive experiments on four widely used benchmark datasets show the superiority of our proposed method compared with state-of-the-art methods.
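The abstract's first contribution, estimating a similarity matrix inside a cross-view contrastive loss so that likely false negatives are suppressed, can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch illustration, not the authors' NC³L implementation: the helper names (clusterwise_soft_similarity, weighted_cross_view_contrastive_loss), the particular down-weighting scheme, and the temperature value are assumptions made only for the example.

```python
# Minimal sketch (not the paper's implementation): a cross-view InfoNCE-style
# loss whose negative terms are down-weighted by an estimated cluster-level
# similarity matrix, so suspected false negatives contribute less to the denominator.
import torch
import torch.nn.functional as F

def clusterwise_soft_similarity(assign_a, assign_b):
    """Cluster-level similarity between two views.
    assign_a, assign_b: (N, K) soft cluster assignments (e.g., from a
    DP-GMM-style variational posterior). Returns an (N, N) matrix in [0, 1]."""
    return assign_a @ assign_b.t()

def weighted_cross_view_contrastive_loss(z_a, z_b, sim_est, temperature=0.5):
    """z_a, z_b: (N, D) embeddings of two views, row i of z_a aligned with row i of z_b.
    sim_est: (N, N) estimated semantic similarity; high values mark suspected false negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) cross-view similarities
    pos = torch.diagonal(logits)                  # known aligned pairs act as positives
    # Down-weight negatives that the clustering deems semantically similar to the anchor.
    neg_weight = 1.0 - sim_est
    neg_weight.fill_diagonal_(0.0)
    denom = torch.exp(pos) + (neg_weight * torch.exp(logits)).sum(dim=1)
    return -(pos - torch.log(denom)).mean()

# Usage with random tensors standing in for encoder outputs and soft assignments.
if __name__ == "__main__":
    torch.manual_seed(0)
    z_a, z_b = torch.randn(8, 16), torch.randn(8, 16)
    assign_a = torch.softmax(torch.randn(8, 5), dim=1)
    assign_b = torch.softmax(torch.randn(8, 5), dim=1)
    sim = clusterwise_soft_similarity(assign_a, assign_b)
    print(weighted_cross_view_contrastive_loss(z_a, z_b, sim).item())
```

The design choice mirrored here is that soft cluster assignments, such as those a Dirichlet process Gaussian mixture posterior would provide, yield cluster-level similarities, so negatives that likely share a cluster with the anchor are attenuated rather than pushed apart; the exact estimator and weighting in the paper may differ.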
Pages: 6158-6172
Number of pages: 15