Semantically consistent multi-view representation learning

Cited by: 10
Authors
Zhou, Yiyang [1 ]
Zheng, Qinghai [2 ]
Bai, Shunshun [1 ]
Zhu, Jihua [1 ]
Affiliations
[1] Xi'an Jiaotong Univ, Sch Software Engn, Xi'an 710049, Peoples R China
[2] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China
Keywords
Multi-view representation learning; Contrastive learning; Semantic consensus information;
DOI
10.1016/j.knosys.2023.110899
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we devote ourselves to the challenging task of Unsupervised Multi-view Representation Learning (UMRL), which requires learning a unified feature representation from multiple views in an unsupervised manner. Existing UMRL methods mainly focus on the learning process within the feature space while ignoring the valuable semantic information hidden in different views. To address this issue, we propose a novel approach called Semantically Consistent Multi-view Representation Learning (SCMRL), which aims to excavate underlying multi-view semantic consensus information and utilize it to guide the unified feature representation learning process. Specifically, SCMRL consists of a within-view reconstruction module and a unified feature representation learning module. These modules are elegantly integrated using a contrastive learning strategy, which serves to align the semantic labels of both view-specific feature representations and the learned unified feature representation simultaneously. This integration allows SCMRL to effectively leverage consensus information in the semantic space, thereby constraining the learning process of the unified feature representation. Extensive experiments demonstrate its superiority over several state-of-the-art algorithms. Our code is released on https://github.com/YiyangZhou/SCMRL. © 2023 Elsevier B.V. All rights reserved.
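The abstract describes aligning the semantic labels of view-specific representations with those of the unified representation via contrastive learning. The following is a minimal, hedged sketch of what such a semantic-level contrastive objective could look like: an InfoNCE-style loss over softmax label distributions, where the positive pair is the same sample's unified and view-specific distributions. All names, the softmax formulation, and the cosine-similarity choice are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the class axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_contrastive_loss(unified_logits, view_logits, tau=0.5):
    """InfoNCE-style loss: pull each sample's view-specific semantic label
    distribution toward its unified one; other samples act as negatives."""
    p_u = softmax(unified_logits)  # (n, k) unified label distributions
    p_v = softmax(view_logits)     # (n, k) view-specific label distributions
    # row-normalize so the dot product below is cosine similarity
    u = p_u / np.linalg.norm(p_u, axis=1, keepdims=True)
    v = p_v / np.linalg.norm(p_v, axis=1, keepdims=True)
    sim = (u @ v.T) / tau          # (n, n) similarity matrix
    # log-softmax over each row; positives sit on the diagonal
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
n, k = 8, 4
unified = rng.normal(size=(n, k))
# a view whose semantic labels nearly agree with the unified ones
view = unified + 0.1 * rng.normal(size=(n, k))
loss = semantic_contrastive_loss(unified, view)
print(float(loss))
```

In the paper's setting this loss would be summed over all views and combined with the within-view reconstruction losses; here it only illustrates the alignment term.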
Pages: 9
Related Papers
50 items in total
  • [1] Decoupled representation for multi-view learning
    Sun, Shiding
    Wang, Bo
    Tian, Yingjie
    PATTERN RECOGNITION, 2024, 151
  • [2] Comprehensive Multi-view Representation Learning
    Zheng, Qinghai
    Zhu, Jihua
    Li, Zhongyu
    Tian, Zhiqiang
    Li, Chen
    INFORMATION FUSION, 2023, 89 : 198 - 209
  • [3] A Survey of Multi-View Representation Learning
    Li, Yingming
    Yang, Ming
    Zhang, Zhongfei
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2019, 31 (10) : 1863 - 1883
  • [4] Tensorized Multi-view Subspace Representation Learning
    Zhang, Changqing
    Fu, Huazhu
    Wang, Jing
    Li, Wen
    Cao, Xiaochun
    Hu, Qinghua
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2020, 128 (8-9) : 2344 - 2361
  • [5] Collaborative Unsupervised Multi-View Representation Learning
    Zheng, Qinghai
    Zhu, Jihua
    Li, Zhongyu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (07) : 4202 - 4210
  • [7] Multi-view representation learning with dual-label collaborative guidance
    Chen, Bin
    Ren, Xiaojin
    Bai, Shunshun
    Chen, Ziyuan
    Zheng, Qinghai
    Zhu, Jihua
    KNOWLEDGE-BASED SYSTEMS, 2024, 305
  • [8] Instance-wise multi-view representation learning
    Li, Dan
    Wang, Haibao
    Wang, Yufeng
    Wang, Shengpei
    INFORMATION FUSION, 2023, 91 : 612 - 622
  • [9] A Clustering-Guided Contrastive Fusion for Multi-View Representation Learning
    Ke, Guanzhou
    Chao, Guoqing
    Wang, Xiaoli
    Xu, Chenyang
    Zhu, Yongqi
    Yu, Yang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (04) : 2056 - 2069
  • [10] Learning unsupervised node representation from multi-view network
    Wang, Chen
    Chen, Xiaojun
    Chen, Bingkun
    Nie, Feiping
    Wang, Bo
    Ming, Zhong
    INFORMATION SCIENCES, 2021, 579 : 700 - 716