Information theory-guided heuristic progressive multi-view coding

Cited by: 2
Authors
Li, Jiangmeng [1 ,2 ]
Gao, Hang [1 ,2 ]
Qiang, Wenwen [1 ,2 ]
Zheng, Changwen [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, Sci & Technol Integrated Informat Syst Lab, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
Keywords
Self-supervised learning; Representation learning; Multi-view; Wasserstein distance; Information theory; Deep network
DOI
10.1016/j.neunet.2023.08.027
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Multi-view representation learning aims to capture comprehensive information from multiple views of a shared context. Recent works intuitively apply contrastive learning to different views in a pairwise manner, which remains limited: view-specific noise is not filtered when learning view-shared representations; false negative pairs, where the negative term actually belongs to the same class as the positive, are treated the same as true negative pairs; and measuring the similarities between terms uniformly can interfere with optimization. Importantly, few works study the theoretical framework of generalized self-supervised multi-view learning, especially for more than two views. To this end, we rethink the existing multi-view learning paradigm from the perspective of information theory and then propose a novel information-theoretical framework for generalized multi-view learning. Guided by it, we build a multi-view coding method with a three-tier progressive architecture, namely Information theory-guided heuristic Progressive Multi-view Coding (IPMC). In the distribution tier, IPMC aligns the distributions between views to reduce view-specific noise. In the set tier, IPMC constructs self-adjusted contrasting pools, which are adaptively modified by a view filter. Lastly, in the instance tier, we adopt a designed unified loss to learn representations and reduce the gradient interference. Theoretically and empirically, we demonstrate the superiority of IPMC over state-of-the-art methods. © 2023 Elsevier Ltd. All rights reserved.
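The three tiers described in the abstract can be sketched schematically. This is an illustrative reconstruction only, not the authors' implementation: the per-dimension 1-D Wasserstein proxy for distribution alignment, the similarity-threshold view filter for suspected false negatives, and all function names are assumptions made for exposition.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # project features onto the unit sphere for cosine similarity
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def wasserstein_1d(a, b):
    # 1-D Wasserstein distance between equal-size empirical samples:
    # mean absolute difference of the sorted values
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def distribution_alignment(z1, z2):
    # distribution tier (sketch): average per-dimension 1-D Wasserstein
    # distance between the two views' feature distributions
    return np.mean([wasserstein_1d(z1[:, d], z2[:, d])
                    for d in range(z1.shape[1])])

def filtered_infonce(z1, z2, tau=0.5, filter_thresh=0.9):
    # set + instance tiers (sketch): InfoNCE over a contrasting pool from
    # which suspected false negatives (cosine similarity above a threshold)
    # have been filtered out
    z1, z2 = l2_normalize(z1), l2_normalize(z2)
    sim = z1 @ z2.T / tau          # temperature-scaled cosine similarities
    n = sim.shape[0]
    losses = []
    for i in range(n):
        keep = np.ones(n, dtype=bool)
        for j in range(n):
            # drop negatives that look like same-class samples
            if j != i and sim[i, j] * tau > filter_thresh:
                keep[j] = False
        kept = sim[i][keep]        # positive i is always kept
        losses.append(-sim[i, i] + np.log(np.sum(np.exp(kept))))
    return float(np.mean(losses))
```

A combined objective in this spirit might weight the two terms, e.g. `filtered_infonce(z1, z2) + lam * distribution_alignment(z1, z2)`; the actual IPMC loss and its weighting are given in the paper itself.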
Pages: 415-432
Page count: 18
References
94 entries in total
  • [91] Zhang, C. Q. Advances in Neural Information Processing Systems, 2019, 32.
  • [92] Zhang, Richard; Isola, Phillip; Efros, Alexei A. Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 645-654.
  • [93] Zhao, H. arXiv:1901.09453, 2019.
  • [94] Zhuang, Fuzhen; Qi, Zhiyuan; Duan, Keyu; Xi, Dongbo; Zhu, Yongchun; Zhu, Hengshu; Xiong, Hui; He, Qing. A Comprehensive Survey on Transfer Learning. Proceedings of the IEEE, 2021, 109(1): 43-76.