Does Self-Supervised Pretraining Really Match ImageNet Weights?

Cited by: 0
Authors
Pototzky, Daniel [1 ,2 ]
Sultan, Azhar [1 ]
Schmidt-Thieme, Lars [2 ]
Affiliations
[1] Robert Bosch GmbH, Hildesheim, Germany
[2] Univ Hildesheim, Hildesheim, Germany
Source
2022 IEEE 14TH IMAGE, VIDEO, AND MULTIDIMENSIONAL SIGNAL PROCESSING WORKSHOP (IVMSP) | 2022
DOI
10.1109/IVMSP54334.2022.9816238
Chinese Library Classification
TP31 [Computer software];
Discipline Classification Code
081202; 0835;
Abstract
Self-supervised pretraining methods for computer vision learn transferable representations from a large number of unlabeled images. Several methods from the field match or even outperform ImageNet weights when finetuning on downstream tasks, creating the impression that self-supervised weights are superior. We challenge this belief and show that state-of-the-art self-supervised methods match ImageNet weights either in classification or in object detection, but not in both. Furthermore, we demonstrate in experiments on image classification, object detection, instance segmentation, and keypoint detection that a more sophisticated supervised training protocol can greatly improve upon ImageNet weights, at least matching and usually outperforming state-of-the-art self-supervised methods.
Pages: 5
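
The abstract describes a weight-transfer comparison: the same backbone is initialized either from supervised ImageNet weights or from a self-supervised checkpoint and is then finetuned on a downstream task. The following is a minimal sketch of that protocol, not the authors' exact training recipe; it assumes torchvision >= 0.13, a hypothetical self-supervised checkpoint file "ssl_resnet50.pth", and a user-provided DataLoader named train_loader.

    import torch
    import torchvision

    USE_IMAGENET_WEIGHTS = True  # flip to compare against a self-supervised backbone
    NUM_CLASSES = 10             # illustrative downstream classification task size

    if USE_IMAGENET_WEIGHTS:
        # Supervised ImageNet baseline weights shipped with torchvision.
        model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
    else:
        # Same architecture initialized from a self-supervised checkpoint
        # (e.g. a SimCLR/MoCo-style backbone); the path and key layout are assumptions.
        model = torchvision.models.resnet50(weights=None)
        state = torch.load("ssl_resnet50.pth", map_location="cpu")
        model.load_state_dict(state, strict=False)  # SSL checkpoints lack the classifier head

    # Replace the classification head for the downstream task and finetune end to end.
    model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
    criterion = torch.nn.CrossEntropyLoss()

    model.train()
    for images, labels in train_loader:  # train_loader is assumed to be defined elsewhere
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

Downstream performance under both initializations (and analogously for detection, segmentation, and keypoint heads) is what the paper compares.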