The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models

Cited by: 38
Authors
Chen, Tianlong [1 ]
Frankle, Jonathan [2 ]
Chang, Shiyu [3 ]
Liu, Sijia [3 ,4 ]
Zhang, Yang [3 ]
Carbin, Michael [2 ]
Wang, Zhangyang [1 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] MIT CSAIL, Cambridge, MA USA
[3] MIT IBM Watson AI Lab, Cambridge, MA USA
[4] Michigan State Univ, E Lansing, MI 48824 USA
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Funding
U.S. National Science Foundation;
Keywords
DOI
10.1109/CVPR46437.2021.01604
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The computer vision community has regained enthusiasm for pre-trained models, spanning both classical ImageNet supervised pre-training and recently emerged self-supervised pre-training such as SimCLR [10] and MoCo [40]. Pre-trained weights often boost a wide range of downstream tasks, including classification, detection, and segmentation. Recent studies further suggest that pre-training benefits from enormous model capacity [11]. This raises a natural question: after pre-training, does a model really have to stay large to preserve its downstream transferability? In this paper, we examine supervised and self-supervised pre-trained models through the lens of the lottery ticket hypothesis (LTH) [31]. LTH identifies highly sparse matching subnetworks that can be trained in isolation from (nearly) scratch and yet reach the full model's performance. We extend the scope of LTH and ask whether matching subnetworks that retain the same downstream transfer performance still exist in pre-trained computer vision models. Our extensive experiments convey an overall positive message: from pre-trained weights obtained by ImageNet classification, SimCLR, and MoCo, we consistently locate matching subnetworks at 59.04% to 96.48% sparsity that transfer universally to multiple downstream tasks, with no performance degradation compared to using the full pre-trained weights. Further analyses reveal that subnetworks found from different pre-training schemes tend to yield diverse mask structures and perturbation sensitivities. We conclude that the core LTH observations remain generally relevant in the pre-training paradigm of computer vision, though more delicate discussion is needed in some cases. Code and pre-trained models will be made available at: https://github.com/VITA-Group/CV_LTH_Pre-training.
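To make the procedure behind these sparsity numbers concrete, below is a minimal sketch of iterative magnitude pruning (IMP) with rewinding to the pre-trained weights, the standard LTH-style recipe for locating matching subnetworks in a pre-trained backbone. This is an illustrative outline under stated assumptions, not the authors' released implementation: build_model, pretrained_state, and train_masked are hypothetical placeholders, and the real experiments involve task-specific heads, training schedules, and both pre-training-task and downstream-task variants not shown here.

import torch

def prunable_weights(model):
    # Consider only convolution and linear weight tensors for pruning,
    # a common convention in LTH-style studies.
    return [m.weight for m in model.modules()
            if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]

def imp_find_matching_subnetwork(build_model, pretrained_state, train_masked,
                                 rounds=10, prune_rate=0.2):
    # Start from the (supervised or self-supervised) pre-trained weights.
    model = build_model()
    model.load_state_dict(pretrained_state, strict=False)
    masks = [torch.ones_like(w) for w in prunable_weights(model)]

    for _ in range(rounds):
        # 1) Train the masked network; train_masked is assumed to keep
        #    pruned weights frozen at zero (e.g., by masking gradients).
        train_masked(model, masks)

        # 2) Globally rank surviving weights by magnitude and prune the
        #    smallest prune_rate fraction of them.
        surviving = torch.cat([w.detach().abs()[m.bool()]
                               for w, m in zip(prunable_weights(model), masks)])
        k = max(1, int(prune_rate * surviving.numel()))
        threshold = surviving.sort().values[k - 1]
        masks = [((w.detach().abs() > threshold) & m.bool()).float()
                 for w, m in zip(prunable_weights(model), masks)]

        # 3) Rewind the remaining weights to their pre-trained values
        #    (rather than re-randomizing them) and re-apply the mask.
        model.load_state_dict(pretrained_state, strict=False)
        with torch.no_grad():
            for w, m in zip(prunable_weights(model), masks):
                w.mul_(m)

    # The (pre-trained weights, final mask) pair defines the candidate
    # matching subnetwork, which is then fine-tuned on each downstream task.
    return model, masks

With 20% of the surviving weights pruned per round, the remaining fraction after n rounds is 0.8**n, so sparsity grows as 1 - 0.8**n: for example, 1 - 0.8**4 = 59.04% after four rounds and 1 - 0.8**15 ≈ 96.48% after fifteen, consistent with the sparsity range quoted in the abstract.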
Pages: 16301-16311
Number of pages: 11
Related papers
87 in total
  • [1] [Anonymous], 2019, International Conference on Learning Representations (ICLR).
  • [2] [Anonymous], 2019, Proceedings of the European Conference on Computer Vision (ECCV).
  • [3] Bachman P., 2019, Advances in Neural Information Processing Systems, Vol. 32.
  • [4] Bengio Y., 2006, Advances in Neural Information Processing Systems, Vol. 19, p. 153, DOI: 10.7551/mitpress/7503.003.0024.
  • [5] Bochkovskiy A., 2020, arXiv:2004.10934, DOI: 10.48550/arXiv.2004.10934.
  • [6] Cai H., 2020, arXiv:2007.11622.
  • [7] Caron M., Bojanowski P., Mairal J., Joulin A., 2019, Unsupervised Pre-Training of Image Features on Non-Curated Data, 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), pp. 2959-2968.
  • [8] Caron M., Bojanowski P., Joulin A., Douze M., 2018, Deep Clustering for Unsupervised Learning of Visual Features, Computer Vision - ECCV 2018, Part XIV, Vol. 11218, pp. 139-156.
  • [9] Chen T., 2020, Proceedings of the 34th International Conference on Neural Information Processing Systems, p. 22243, DOI: 10.5555/3495724.3497589.
  • [10] Chen T., Liu S., Chang S., Cheng Y., Amini L., Wang Z., 2020, Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 696-705.