What makes for uniformity for non-contrastive self-supervised learning?

Cited by: 1
Authors
Wang YinQuan [1 ,2 ]
Zhang XiaoPeng [3 ]
Tian Qi [3 ]
Lu JinHu [4 ]
Affiliations
[1] Acad Math & Syst Sci, Chinese Acad Sci, Key Lab Syst & Control, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[3] Huawei Inc, Shenzhen 518128, Peoples R China
[4] Beihang Univ, Sch Automat Sci & Elect Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
contrastive learning; self-supervised learning; representation; uniformity; dynamics;
DOI
10.1007/s11431-021-2041-7
CLC Classification Number
T [Industrial Technology];
Discipline Code
08;
Abstract
Recent advances in self-supervised learning (SSL) have been remarkable, especially for contrastive methods, which pull two augmented views of the same image together while pushing the views of all other images apart. In this setting, negative pairs play a key role in avoiding collapsed representations. Recent studies, such as those on bootstrap your own latent (BYOL) and SimSiam, have surprisingly achieved comparable performance even without contrasting negative samples. This raises a basic theoretical question for SSL: how do different SSL methods avoid collapsed representations, and is there a common design principle? In this study, we look closely at current non-contrastive SSL methods and analyze the key factors that prevent collapse. To this end, we present a new uniformity metric as an indicator and study its local dynamics to diagnose collapse in different scenarios. Moreover, we present principles for choosing a good predictor, so that the optimization process can be controlled explicitly. Our theoretical analysis is validated on widely used benchmarks spanning datasets of different scales. We also compare recent SSL methods, analyze their commonalities in avoiding collapse, and offer some ideas for future algorithm designs.
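The paper's exact uniformity indicator is not reproduced in this record. As an illustration of the underlying idea, the sketch below uses a common uniformity measure (the log of the mean pairwise Gaussian potential over L2-normalized embeddings, in the style of Wang and Isola) to distinguish a well-spread representation from a collapsed one; the function name `uniformity` and the temperature `t` are illustrative choices, not the authors' definitions.

```python
import numpy as np

def uniformity(embeddings, t=2.0):
    """Log of the mean pairwise Gaussian potential over L2-normalized
    embeddings. More negative means more uniformly spread on the sphere;
    a value of 0 indicates full collapse (all points coincide)."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Pairwise squared Euclidean distances between normalized embeddings.
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    i, j = np.triu_indices(len(z), k=1)  # distinct pairs only
    return np.log(np.mean(np.exp(-t * sq_dists[i, j])))

rng = np.random.default_rng(0)
spread = rng.normal(size=(256, 64))                       # well-spread embeddings
collapsed = np.tile(rng.normal(size=(1, 64)), (256, 1))   # all points identical

# A collapsed representation scores 0 (exp(-t*0) = 1, log 1 = 0);
# a spread one scores strictly lower.
assert uniformity(spread) < uniformity(collapsed)
assert np.isclose(uniformity(collapsed), 0.0)
```

Tracking such an indicator during training is one simple way to diagnose whether a non-contrastive method is drifting toward a collapsed solution.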
Pages: 2399-2408
Number of pages: 10
Related Papers
50 records in total
  • [1] Wang, YinQuan; Zhang, XiaoPeng; Tian, Qi; Lü, JinHu. What makes for uniformity for non-contrastive self-supervised learning? Science China Technological Sciences, 2022, 65: 2399-2408
  • [2] Miao, Runxuan; Koyuncu, Erdem. Contrastive and Non-Contrastive Strategies for Federated Self-Supervised Representation Learning and Deep Clustering. IEEE Journal of Selected Topics in Signal Processing, 2024, 18(6): 1070-1084
  • [3] Cho, Jaejin; Pappagari, Raghavendra; Zelasko, Piotr; Moro-Velazquez, Laureano; Villalba, Jesus; Dehak, Najim. Non-Contrastive Self-Supervised Learning of Utterance-Level Speech Representations. Interspeech 2022, 2022: 4028-4032
  • [4] Cho, Jaejin; Villalba, Jesus; Moro-Velazquez, Laureano; Dehak, Najim. Non-Contrastive Self-Supervised Learning for Utterance-Level Information Extraction From Speech. IEEE Journal of Selected Topics in Signal Processing, 2022, 16(6): 1284-1295
  • [5] Zhang, Chunlei; Yu, Dong. C3-DINO: Joint Contrastive and Non-Contrastive Self-Supervised Learning for Speaker Verification. IEEE Journal of Selected Topics in Signal Processing, 2022, 16(6): 1273-1283
  • [6] Liu, Xiao; Zhang, Fanjin; Hou, Zhenyu; Mian, Li; Wang, Zhaoyu; Zhang, Jing; Tang, Jie. Self-Supervised Learning: Generative or Contrastive. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(1): 857-876
  • [7] Jaiswal, Ashish; Babu, Ashwin Ramesh; Zadeh, Mohammad Zaki; Banerjee, Debapriya; Makedon, Fillia. A Survey on Contrastive Self-Supervised Learning. Technologies, 2021, 9(1)
  • [8] Nguyen, Thanh; Pham, Trung Xuan; Zhang, Chaoning; Luu, Tung M.; Vu, Thang; Yoo, Chang D. DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning. IEEE Access, 2023, 11: 21534-21545
  • [9] Wei, Crystal T.; Hsieh, Ming-En; Liu, Chien-Liang; Tseng, Vincent S. Contrastive Heartbeats: Contrastive Learning for Self-Supervised ECG Representation and Phenotyping. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 1126-1130
  • [10] Naderializadeh, Navid. Contrastive Self-Supervised Learning for Wireless Power Control. 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), 2021: 4965-4969