A Broad Study on the Transferability of Visual Representations with Contrastive Learning

Cited: 33
Authors
Islam, Ashraful [1 ]
Chen, Chun-Fu [2 ,3 ]
Panda, Rameswar [2 ,3 ]
Karlinsky, Leonid [3 ]
Radke, Richard [1 ]
Feris, Rogerio [2 ,3 ]
Affiliations
[1] Rensselaer Polytech Inst, Troy, NY 12181 USA
[2] MIT IBM Watson AI Lab, Cambridge, MA USA
[3] IBM Res, Armonk, NY USA
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
DOI
10.1109/ICCV48922.2021.00872
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Tremendous progress has been made in visual representation learning, notably with the recent success of self-supervised contrastive learning methods. Supervised contrastive learning has also been shown to outperform its cross-entropy counterpart by leveraging labels to choose where to contrast. However, there has been little work exploring the transfer capability of contrastive learning to a different domain. In this paper, we conduct a comprehensive study on the transferability of learned representations of different contrastive approaches for linear evaluation, full-network transfer, and few-shot recognition on 12 downstream datasets from different domains, and on object detection tasks on MSCOCO and VOC0712. The results show that the contrastive approaches learn representations that are easily transferable to a different downstream task. We further observe that a joint objective combining the self-supervised contrastive loss with a cross-entropy or supervised-contrastive loss leads to better transferability of these models over their supervised counterparts. Our analysis reveals that the representations learned by the contrastive approaches contain more low/mid-level semantics than cross-entropy models, which enables them to quickly adapt to a new task. Our code and models will be publicly available to facilitate future research on the transferability of visual representations.
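The joint objective described in the abstract sums a self-supervised contrastive term with a supervised term. As a rough, hedged sketch (not the authors' implementation; function names, the InfoNCE formulation, and the weighting factor `alpha` are illustrative assumptions), the combination could look like this in NumPy:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Self-supervised contrastive (InfoNCE-style) loss between two views.
    z1, z2: (N, D) L2-normalized embeddings of two augmentations of the
    same N images; row i of z1 and row i of z2 form the positive pair."""
    n = z1.shape[0]
    logits = z1 @ z2.T / temperature                      # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), np.arange(n)].mean()   # positives on diagonal

def cross_entropy_loss(class_logits, labels):
    """Standard softmax cross-entropy over class logits."""
    logits = class_logits - class_logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

def joint_loss(z1, z2, class_logits, labels, alpha=1.0):
    """Joint objective: supervised loss plus a weighted self-supervised
    contrastive loss (alpha is a hypothetical balancing weight)."""
    return cross_entropy_loss(class_logits, labels) + alpha * info_nce_loss(z1, z2)
```

Minimizing `joint_loss` pushes the network to both classify correctly and keep augmented views of the same image close in embedding space, which is the intuition behind the improved transferability reported in the paper.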
Pages: 8825-8835
Page count: 11