Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss

Cited by: 0
Authors
HaoChen, Jeff Z. [1 ]
Wei, Colin [1 ]
Gaidon, Adrien [2 ]
Ma, Tengyu [1 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
[2] Toyota Res Inst, Toyota, Japan
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021 / Vol. 34
Keywords
APPROXIMATION
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm, which learns representations by pushing positive pairs, or similar examples from the same class, closer together while keeping negative pairs far apart. Despite the empirical successes, theoretical foundations are limited - prior analyses assume conditional independence of the positive pairs given the same class label, but recent empirical applications use heavily correlated positive pairs (i.e., data augmentations of the same image). Our work analyzes contrastive learning without assuming conditional independence of positive pairs using a novel concept of the augmentation graph on data. Edges in this graph connect augmentations of the same datapoint, and ground-truth classes naturally form connected sub-graphs. We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective on neural net representations. Minimizing this objective leads to features with provable accuracy guarantees under linear probe evaluation. By standard generalization bounds, these accuracy guarantees also hold when minimizing the training contrastive loss. Empirically, the features learned by our objective can match or outperform several strong baselines on benchmark vision datasets. In all, this work provides the first provable analysis for contrastive learning where guarantees for linear probe evaluation can apply to realistic empirical settings.
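For orientation, the "contrastive learning objective on neural net representations" mentioned in the abstract is the paper's spectral contrastive loss. A sketch of its population form is given below; the notation (f for the learned representation, (x, x+) for a positive pair of augmentations of the same datapoint, x and x' for independently drawn augmentations) is supplied here for illustration rather than quoted from this record.

\mathcal{L}_{\mathrm{scl}}(f) \;=\; -2\,\mathbb{E}_{(x,\,x^{+})}\!\left[ f(x)^{\top} f(x^{+}) \right] \;+\; \mathbb{E}_{x,\,x'}\!\left[ \big( f(x)^{\top} f(x') \big)^{2} \right]

The first term pulls representations of positive pairs together and the second pushes independently drawn pairs toward orthogonality; as the abstract states, minimizing this objective amounts to a spectral decomposition of the population augmentation graph and yields features with provable accuracy guarantees under linear probe evaluation.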
Pages: 12