What makes for uniformity for non-contrastive self-supervised learning?

Cited by: 1
|
Authors
Wang YinQuan [1 ,2 ]
Zhang XiaoPeng [3 ]
Tian Qi [3 ]
Lu JinHu [4 ]
Affiliations
[1] Acad Math & Syst Sci, Chinese Acad Sci, Key Lab Syst & Control, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[3] Huawei Inc, Shenzhen 518128, Peoples R China
[4] Beihang Univ, Sch Automat Sci & Elect Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
contrastive learning; self-supervised learning; representation; uniformity; dynamics;
DOI
10.1007/s11431-021-2041-7
Chinese Library Classification
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
Self-supervised learning (SSL) has recently made remarkable progress, especially through contrastive methods that pull two augmented views of one image together while pushing the views of all other images away. In this setting, negative pairs play a key role in avoiding collapsed representations. Recent methods such as bootstrap your own latent (BYOL) and SimSiam have, surprisingly, achieved comparable performance even without contrasting negative samples. This raises a basic theoretical question for SSL: how do different SSL methods avoid collapsed representations, and is there a common design principle? In this study, we look closely at current non-contrastive SSL methods and analyze the key factors that prevent collapse. To this end, we present a new uniformity indicator and study its local dynamics to diagnose collapse in different scenarios. Moreover, we present principles for choosing a good predictor, so that the optimization process can be controlled explicitly. Our theoretical analysis is validated on widely used benchmarks spanning datasets of different scales. We also compare recent SSL methods, analyze their commonalities in avoiding collapse, and discuss ideas for future algorithm designs.
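The record does not spell out the paper's own uniformity indicator. As a rough illustration of the kind of metric involved, the sketch below implements the commonly used hypersphere-uniformity measure (log of the mean pairwise Gaussian potential of L2-normalized embeddings): a fully collapsed representation scores 0, while well-spread embeddings score strictly below it. The function name and the temperature `t = 2.0` are illustrative choices, not taken from the paper.

```python
import numpy as np

def uniformity(x: np.ndarray, t: float = 2.0) -> float:
    """Hypersphere uniformity: log mean Gaussian potential over distinct pairs.

    Lower (more negative) values mean the embeddings are more uniformly
    spread on the unit sphere; 0 indicates total collapse.
    """
    # L2-normalize each embedding onto the unit hypersphere
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    # squared pairwise distances via the Gram matrix: ||a-b||^2 = 2 - 2 a.b
    sq = np.maximum(2.0 - 2.0 * (x @ x.T), 0.0)
    iu = np.triu_indices(len(x), k=1)  # distinct pairs only
    return float(np.log(np.mean(np.exp(-t * sq[iu]))))

collapsed = np.ones((64, 8))  # every embedding identical: collapsed
spread = np.random.default_rng(0).standard_normal((64, 8))
print(uniformity(collapsed))  # 0.0: all pairwise distances are zero
print(uniformity(spread))     # negative: embeddings spread over the sphere
```

Tracking such an indicator during training is one way to diagnose whether a non-contrastive method is drifting toward collapse.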
Pages: 2399-2408
Page count: 10
Related papers
50 records in total
  • [31] Contrastive Self-Supervised Learning With Smoothed Representation for Remote Sensing
    Jung, Heechul
    Oh, Yoonju
    Jeong, Seongho
    Lee, Chaehyeon
    Jeon, Taegyun
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [32] Toward Graph Self-Supervised Learning With Contrastive Adjusted Zooming
    Zheng, Yizhen
    Li, Ming
    Pan, Shirui
    Li, Yuan-Fang
    Peng, Hao
    Li, Zhao
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (07) : 8882 - 8896
  • [34] Contrastive Self-supervised Representation Learning Using Synthetic Data
    Dong-Yu She
    Kun Xu
    International Journal of Automation and Computing, 2021, 18 : 556 - 567
  • [36] SELF-SUPERVISED ACOUSTIC ANOMALY DETECTION VIA CONTRASTIVE LEARNING
    Hojjati, Hadi
    Armanfard, Narges
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3253 - 3257
  • [37] Enhancing robust VQA via contrastive and self-supervised learning
    Cao, Runlin
    Li, Zhixin
    Tang, Zhenjun
    Zhang, Canlong
    Ma, Huifang
    PATTERN RECOGNITION, 2025, 159
  • [38] Contrastive self-supervised learning: review, progress, challenges and future research directions
    Kumar, Pranjal
    Rawat, Piyush
    Chauhan, Siddhartha
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2022, 11 (04) : 461 - 488
  • [39] Self-Supervised Contrastive Learning for Volcanic Unrest Detection
    Bountos, Nikolaos Ioannis
    Papoutsis, Ioannis
    Michail, Dimitrios
    Anantrasirichai, Nantheera
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [40] CLSSATP: Contrastive learning and self-supervised learning model for aquatic toxicity prediction
    Lin, Ye
    Yang, Xin
    Zhang, Mingxuan
    Cheng, Jinyan
    Lin, Hai
    Zhao, Qi
    AQUATIC TOXICOLOGY, 2025, 279