Information theory-guided heuristic progressive multi-view coding

Cited: 2
Authors
Li, Jiangmeng [1 ,2 ]
Gao, Hang [1 ,2 ]
Qiang, Wenwen [1 ,2 ]
Zheng, Changwen [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, Sci & Technol Integrated Informat Syst Lab, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
Keywords
Self-supervised learning; Representation learning; Multi-view; Wasserstein distance; Information theory; Deep network
DOI
10.1016/j.neunet.2023.08.027
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Multi-view representation learning aims to capture comprehensive information from multiple views of a shared context. Recent works intuitively apply contrastive learning to different views in a pairwise manner, which still leaves several issues unresolved: view-specific noise is not filtered out when learning view-shared representations; fake negative pairs, in which the negative terms actually belong to the same class as the positive, are treated on a par with real negative pairs; and weighting the similarities between terms uniformly can interfere with optimization. Importantly, few works study the theoretical framework of generalized self-supervised multi-view learning, especially for more than two views. To this end, we rethink the existing multi-view learning paradigm from the perspective of information theory and propose a novel information-theoretical framework for generalized multi-view learning. Guided by this framework, we build a multi-view coding method with a three-tier progressive architecture, namely Information theory-guided heuristic Progressive Multi-view Coding (IPMC). In the distribution tier, IPMC aligns the distributions between views to reduce view-specific noise. In the set tier, IPMC constructs self-adjusted contrasting pools, which are adaptively modified by a view filter. Lastly, in the instance tier, we adopt a designed unified loss to learn representations and reduce gradient interference. Theoretically and empirically, we demonstrate the superiority of IPMC over state-of-the-art methods. © 2023 Elsevier Ltd. All rights reserved.
Pages: 415-432 (18 pages)
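As an illustration of the three-tier objective described in the abstract, the following is a minimal, hypothetical PyTorch sketch. It assumes a sliced-Wasserstein proxy for the distribution-tier alignment, a simple top-k similarity filter standing in for the set tier's self-adjusted contrasting pools, and a plain InfoNCE loss for the instance tier; none of these component choices are taken from the paper itself, and the progressive training schedule is collapsed into a single combined loss for brevity.

# Hypothetical sketch only: the component choices below are illustrative
# assumptions, not the paper's actual formulation.
import torch
import torch.nn.functional as F

def sliced_wasserstein(x, y, n_proj=64):
    """Distribution tier (assumed): 1-D sliced-Wasserstein distance between
    two batches of view embeddings, used to align view distributions."""
    proj = F.normalize(torch.randn(x.size(1), n_proj, device=x.device), dim=0)
    xp, _ = torch.sort(x @ proj, dim=0)  # project, then sort along the batch
    yp, _ = torch.sort(y @ proj, dim=0)
    return (xp - yp).pow(2).mean()

def filtered_infonce(zi, zj, tau=0.1, drop_top=2):
    """Set + instance tiers (assumed): discard the `drop_top` in-batch
    negatives most similar to the anchor as likely fake negatives, then
    apply an InfoNCE loss with the aligned pair as the positive."""
    zi, zj = F.normalize(zi, dim=-1), F.normalize(zj, dim=-1)
    sim = zi @ zj.t() / tau                    # (B, B); diagonal = positives
    b = sim.size(0)
    pos = sim.diag().unsqueeze(1)              # (B, 1)
    off_diag = ~torch.eye(b, dtype=torch.bool, device=sim.device)
    neg = sim[off_diag].view(b, b - 1)         # in-batch negatives
    if drop_top > 0:
        neg, _ = torch.sort(neg, dim=1)        # ascending similarity
        neg = neg[:, :-drop_top]               # drop the most similar ones
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(b, dtype=torch.long, device=sim.device)
    return F.cross_entropy(logits, labels)     # positive sits at index 0

def ipmc_style_loss(views, lam=0.5):
    """Combine both terms over all ordered view pairs; the paper's
    progressive schedule is collapsed into one objective for brevity."""
    total, pairs = 0.0, 0
    for i, zi in enumerate(views):
        for j, zj in enumerate(views):
            if i != j:
                total = total + lam * sliced_wasserstein(zi, zj) \
                              + filtered_infonce(zi, zj)
                pairs += 1
    return total / pairs

# Toy usage: three views of a batch of 8 samples with 32-dim embeddings.
views = [torch.randn(8, 32) for _ in range(3)]
print(ipmc_style_loss(views).item())

In this sketch, lam trades off distribution alignment against the contrastive term, and drop_top controls how aggressively the anchor's most similar in-batch negatives are discarded as likely fake negatives; both are hypothetical knobs introduced here, not parameters from the paper.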