Contrastive Learning of Image Representations with Cross-Video Cycle-Consistency

Cited by: 12
Authors
Wu, Haiping [1 ]
Wang, Xiaolong [2 ]
Affiliations
[1] McGill Univ, Mila, Montreal, PQ, Canada
[2] Univ Calif San Diego, La Jolla, CA, USA
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
DOI
10.1109/ICCV48922.2021.00999
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Recent works have advanced the performance of self-supervised representation learning by a large margin. At the core of these methods is intra-image invariance learning: two different transformations of one image instance are treated as a positive sample pair, and various tasks are designed to learn invariant representations by comparing the pair. Analogously, for video data, representations of frames from the same video are trained to be closer than frames from other videos, i.e. intra-video invariance. However, the cross-video relation has barely been explored for visual representation learning. Unlike intra-video invariance, ground-truth labels for cross-video relations are usually unavailable without human labor. In this paper, we propose a novel contrastive learning method which explores the cross-video relation by using cycle-consistency for general image representation learning. This allows us to collect positive sample pairs across different video instances, which we hypothesize will lead to higher-level semantics. We validate our method by transferring our image representation to multiple downstream tasks including visual object tracking, image classification, and action recognition. We show significant improvement over state-of-the-art contrastive learning methods. Project page is available at https://happywu.github.io/cycle_contrast_video
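The cycle-consistency idea in the abstract can be illustrated with a minimal sketch: a frame embedding from video A is matched to its nearest neighbor in video B, and that neighbor is mapped back to A; if the cycle returns to the starting frame, the pair is kept as a cross-video positive. This is not the authors' implementation — the function name, the use of hard (argmax) nearest neighbors, and cosine similarity over L2-normalized embeddings are all assumptions for illustration.

```python
import numpy as np

def cycle_consistent_pairs(feats_a, feats_b):
    """Mine cross-video positive pairs via cycle-consistency (illustrative sketch).

    feats_a: (num_a, d) frame embeddings from video A
    feats_b: (num_b, d) frame embeddings from video B
    Returns index pairs (i, j) where frame i in A maps to frame j in B
    and frame j maps back to frame i (a consistent cycle).
    """
    # L2-normalize so the dot product equals cosine similarity
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                 # (num_a, num_b) cosine similarities
    fwd = sim.argmax(axis=1)      # nearest neighbor in B for each frame of A
    bwd = sim.argmax(axis=0)      # nearest neighbor in A for each frame of B
    # keep only pairs whose forward-backward cycle closes
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]
```

The surviving pairs could then serve as positives in a standard contrastive loss, alongside the usual intra-video positives.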
Pages: 10129-10139
Page count: 11