Self-Supervised Scene-Debiasing for Video Representation Learning via Background Patching

Cited by: 12
Authors
Assefa, Maregu [1 ]
Jiang, Wei [1 ]
Gedamu, Kumie [2 ]
Yilma, Getinet [1 ]
Kumeda, Bulbula [1 ]
Ayalew, Melese [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu 610054, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 610054, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Action recognition; background patching; label smoothing; scene-debiasing; self-supervised learning; video representation; NETWORKS;
DOI
10.1109/TMM.2022.3193559
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Self-supervised learning has considerably improved video representation learning by automatically discovering supervisory signals in unlabeled videos. However, because existing video datasets are scene-biased, current methods over-rely on the dominant scene context during action inference. This paper therefore proposes Background Patching (BP), a scene-debiasing augmentation strategy that alleviates the model's reliance on the video background in a self-supervised contrastive manner. BP reduces the negative influence of the video background by mixing a randomly patched frame into the video. Specifically, BP randomly crops four frames from four different videos and patches them together to construct a new frame for each target video; this patched frame is then mixed with all frames of the target video to produce a spatially distorted video sample. Existing self-supervised contrastive frameworks are then used to pull the representations of the distorted and original videos closer together. Moreover, BP mixes the semantic labels of the patches with the target video's label, regularizing the contrastive model to soften decision boundaries in the embedding space. The model is thus explicitly constrained to suppress background influence by emphasizing motion changes. Extensive experimental results show that BP significantly improves performance on various video understanding downstream tasks, including action recognition, action detection, and video retrieval.
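The augmentation described in the abstract can be sketched roughly as follows. This is a minimal illustration based only on the abstract: the 2x2 patch layout, the mixing weight `lam`, and the equal label weights for the four patches are assumptions, not details confirmed by the paper.

```python
import numpy as np

def background_patching(video, other_frames, patch_labels, target_label,
                        num_classes, lam=0.5):
    """Rough sketch of Background Patching (BP) as described in the abstract.

    video:        (T, H, W, C) float array, the target clip.
    other_frames: four (H, W, C) frames cropped from four other videos.
    patch_labels: class indices of the four source videos.
    lam:          assumed mixing weight (not specified in the abstract).
    """
    T, H, W, C = video.shape
    h2, w2 = H // 2, W // 2
    # 1) Patch four crops into a single new frame (assumed 2x2 grid).
    patched = np.zeros((H, W, C), dtype=video.dtype)
    slots = [(0, 0), (0, w2), (h2, 0), (h2, w2)]
    for frame, (y, x) in zip(other_frames, slots):
        patched[y:y + h2, x:x + w2] = frame[:h2, :w2]
    # 2) Mix the patched frame into every frame of the target clip,
    #    producing a spatially distorted video sample.
    mixed_video = lam * video + (1.0 - lam) * patched[None, ...]
    # 3) Mix semantic labels: the target keeps weight lam, each patch
    #    shares (1 - lam) / 4, softening decision boundaries.
    mixed_label = np.zeros(num_classes)
    mixed_label[target_label] += lam
    for lbl in patch_labels:
        mixed_label[lbl] += (1.0 - lam) / 4.0
    return mixed_video, mixed_label
```

Per the abstract, the distorted sample would then be paired with the original clip as a positive pair in an existing self-supervised contrastive framework, pulling their representations together.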
Pages: 5500-5515
Page count: 16