Similarity contrastive estimation for image and video soft contrastive self-supervised learning

Cited by: 2
Authors
Denize, Julien [1 ,2 ]
Rabarisoa, Jaonary [1 ]
Orcesi, Astrid [1 ]
Herault, Romain [2 ]
Affiliations
[1] Univ Paris Saclay, CEA, List, F-91120 Palaiseau, France
[2] Normandie Univ, INSA Rouen, LITIS, F-76801 St Etienne Du Rouvray, France
Keywords
Deep learning; Self-supervised learning; Contrastive; Representation
DOI
10.1007/s00138-023-01444-9
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Contrastive representation learning has proven to be an effective self-supervised learning method for images and videos. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives that should be contrasted with other instances, called negatives, which are treated as noise. However, several instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should capture the relations between instances, i.e., their semantic similarity and dissimilarity, which contrastive learning harms by treating all negatives as noise. To address this issue, we propose a novel formulation of contrastive learning that uses the semantic similarity between instances, called Similarity Contrastive Estimation (SCE). Our training objective is a soft contrastive one that brings positives closer and estimates a continuous distribution to push or pull negative instances according to their learned similarities. We empirically validate our approach on both image and video representation learning. We show that SCE performs competitively with the state of the art on the ImageNet linear evaluation protocol with fewer pretraining epochs and that it generalizes to several downstream image tasks. We also show that SCE reaches state-of-the-art results for video representation pretraining and that the learned representation generalizes to video downstream tasks. Source code is available at: https://github.com/juliendenize/eztorch.
Pages: 19
Related papers
50 records total
  • [31] Contrastive Self-Supervised Learning for Optical Music Recognition
    Penarrubia, Carlos
    Valero-Mas, Jose J.
    Calvo-Zaragoza, Jorge
    DOCUMENT ANALYSIS SYSTEMS, DAS 2024, 2024, 14994 : 312 - 326
  • [32] Memory Bank Clustering for Self-supervised Contrastive Learning
    Hao, Yiqing
    An, Gaoyun
    Ruan, Qiuqi
    IMAGE AND GRAPHICS TECHNOLOGIES AND APPLICATIONS, IGTA 2021, 2021, 1480 : 132 - 144
  • [33] Self-supervised contrastive learning for implicit collaborative filtering
    Song, Shipeng
    Liu, Bin
    Teng, Fei
    Li, Tianrui
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139
  • [34] Self-Supervised Contrastive Learning for Unsupervised Phoneme Segmentation
    Kreuk, Felix
    Keshet, Joseph
    Adi, Yossi
    INTERSPEECH 2020, 2020, : 3700 - 3704
  • [35] SCL: Self-supervised contrastive learning for few-shot image classification
    Lim, Jit Yan
    Lim, Kian Ming
    Lee, Chin Poo
    Tan, Yong Xuan
    NEURAL NETWORKS, 2023, 165 : 19 - 30
  • [36] Robust image hashing for content identification through contrastive self-supervised learning
    Fonseca-Bustos, Jesus
    Alejandra Ramirez-Gutierrez, Kelsey
    Feregrino-Uribe, Claudia
    NEURAL NETWORKS, 2022, 156 : 81 - 94
  • [37] Deep Contrastive Self-Supervised Hashing for Remote Sensing Image Retrieval
    Tan, Xiaoyan
    Zou, Yun
    Guo, Ziyang
    Zhou, Ke
    Yuan, Qiangqiang
    REMOTE SENSING, 2022, 14 (15)
  • [38] CLSSATP: Contrastive learning and self-supervised learning model for aquatic toxicity prediction
    Lin, Ye
    Yang, Xin
    Zhang, Mingxuan
    Cheng, Jinyan
    Lin, Hai
    Zhao, Qi
    AQUATIC TOXICOLOGY, 2025, 279
  • [39] What makes for uniformity for non-contrastive self-supervised learning?
    Wang, YinQuan
    Zhang, XiaoPeng
    Tian, Qi
    Lü, JinHu
    Science China Technological Sciences, 2022, 65 : 2399 - 2408
  • [40] Self-Supervised Contrastive Learning for Medical Time Series: A Systematic Review
    Liu, Ziyu
    Alavi, Azadeh
    Li, Minyi
    Zhang, Xiang
    SENSORS, 2023, 23 (09)