Self-Supervised Visual Learning by Variable Playback Speeds Prediction of a Video

Cited by: 17
Authors
Cho, Hyeon [1 ]
Kim, Taehoon [1 ]
Chang, Hyung Jin [2 ]
Hwang, Wonjun [1 ]
Affiliations
[1] Ajou Univ, Dept Artificial Intelligence, Suwon 16499, South Korea
[2] Univ Birmingham, Sch Comp Sci, Birmingham B15 2TT, W Midlands, England
Funding
National Research Foundation, Singapore
Keywords
Three-dimensional displays; Task analysis; Visualization; Feature extraction; Semantics; Coherence; Sorting; Action recognition; representation learning; self-supervised learning
DOI
10.1109/ACCESS.2021.3084840
CLC classification number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
We propose a self-supervised visual learning method that predicts the variable playback speeds of a video. Without semantic labels, we learn the spatio-temporal visual representation of the video by leveraging the variations in visual appearance across different playback speeds, under the assumption of temporal coherence. To learn the spatio-temporal visual variations over the entire video, we not only predict a single playback speed but also generate clips with various playback speeds and directions from randomized starting points. Hence, the visual representation can be learned from the meta information (playback speeds and directions) of the video. We also propose a new layer-dependable temporal group normalization method for 3D convolutional networks that improves representation learning performance: the temporal features are divided into several groups and each group is normalized with its own corresponding parameters. We validate the effectiveness of our method by fine-tuning it on the action recognition and video retrieval tasks of UCF-101 and HMDB-51. All source code is released at https://github.com/hyeon-jo/PSPNet.
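The abstract describes two technical components: sampling training clips at variable playback speeds and directions with randomized starting points, and a layer-dependable temporal group normalization that divides temporal features into groups normalized with separate parameters. Below is a minimal PyTorch sketch of both ideas, not the authors' released implementation (see the linked repository for that); the names sample_speed_clip and TemporalGroupNorm, the speed set, and the grouping scheme are illustrative assumptions.

```python
# Minimal sketch of the two ideas in the abstract (NOT the authors' released
# code; see https://github.com/hyeon-jo/PSPNet for that). The speed set and
# grouping scheme below are illustrative assumptions.
import random
import torch
import torch.nn as nn


def sample_speed_clip(video, clip_len=16, speeds=(1, 2, 4), allow_reverse=True):
    """Sample a clip at a random playback speed and direction.

    video: tensor of shape (C, T, H, W).
    Returns (clip, speed_index); the clip has at most `clip_len` frames.
    """
    _, t, _, _ = video.shape
    speed = random.choice(speeds)
    span = clip_len * speed
    start = random.randint(0, max(t - span, 0))            # randomized starting point
    idx = torch.arange(start, min(start + span, t), speed)[:clip_len]
    if allow_reverse and random.random() < 0.5:
        idx = idx.flip(0)                                   # reversed playback direction
    return video[:, idx], speeds.index(speed)


class TemporalGroupNorm(nn.Module):
    """Divide temporal features into groups and normalize each group with its
    own affine parameters; operates on (N, C, T, H, W) feature maps."""

    def __init__(self, num_channels, num_groups=2):
        super().__init__()
        self.num_groups = num_groups
        # One independently parameterized normalization per temporal group.
        self.norms = nn.ModuleList(
            nn.GroupNorm(1, num_channels) for _ in range(num_groups)
        )

    def forward(self, x):
        chunks = torch.chunk(x, self.num_groups, dim=2)     # split along time axis
        return torch.cat([n(c) for n, c in zip(self.norms, chunks)], dim=2)
```

In this sketch, a 3D backbone would be pretrained with the speed index returned by sample_speed_clip as a classification target and then fine-tuned, mirroring the transfer setup evaluated on UCF-101 and HMDB-51.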
Pages: 79562-79571
Page count: 10