Deep Contrastive Learning: A Survey

Cited by: 0
Authors
Zhang C.-S. [1 ]
Chen J. [1 ]
Li Q.-L. [1 ]
Deng B.-Q. [1 ]
Wang J. [1 ]
Chen C.-G. [1 ]
Affiliations
[1] Henan Key Lab of Big Data Analysis and Processing, Henan University, Kaifeng
Source
Zidonghua Xuebao/Acta Automatica Sinica | 2023 / Vol. 49 / No. 1
Keywords
Contrastive learning; deep learning; feature extraction; metric learning; self-supervised learning
DOI
10.16383/j.aas.c220421
Abstract
In deep learning, how to exploit the vast amount of unlabeled data to enhance the feature extraction capability of deep neural networks has been a crucial research concern, and contrastive learning is an effective approach to this problem. It has attracted significant research effort in the past few years, and a large number of contrastive learning methods have been proposed. In this paper, we survey recent advances in contrastive learning in a comprehensive way. We first propose a new taxonomy that divides existing methods into five categories: 1) sample pair construction methods, 2) image augmentation methods, 3) network architecture level methods, 4) loss function level methods, and 5) applications. Based on this taxonomy, we systematically review the methods in each category and analyze the characteristics and differences of representative methods. Moreover, we report and compare the performance of different contrastive learning methods on benchmark datasets. We also retrace the history of contrastive learning and discuss the differences and connections among contrastive learning, self-supervised learning, and metric learning. Finally, we discuss remaining issues and challenges in contrastive learning and outline its future directions. © 2023 Science Press. All rights reserved.
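As a concrete illustration of the loss-function-level methods the abstract mentions, the sketch below implements an InfoNCE-style contrastive objective in plain NumPy. This is a generic, minimal formulation of the widely used loss, not the specific variant of any method surveyed in the paper; the function name and the temperature value are illustrative choices.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss.

    z1[i] and z2[i] are embeddings of a positive pair (e.g. two
    augmented views of the same image); all other pairings within
    the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by the temperature.
    logits = (z1 @ z2.T) / temperature
    # Subtract the row-wise max for numerical stability of softmax.
    logits -= logits.max(axis=1, keepdims=True)
    # Log-softmax over each row; positives sit on the diagonal.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Lowering the temperature sharpens the softmax and penalizes hard negatives more strongly; correctly matched pairs yield a lower loss than mismatched ones.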
Pages: 15-39
Page count: 24