Motion-aware Contrastive Video Representation Learning via Foreground-background Merging

Cited by: 29
Authors
Ding, Shuangrui [1 ]
Li, Maomao [2 ]
Yang, Tianyu [2 ]
Qian, Rui [3 ]
Xu, Haohang [1 ]
Chen, Qingyi [4 ]
Wang, Jue [2 ]
Xiong, Hongkai [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Tencent AI Lab, Shenzhen, Peoples R China
[3] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[4] Univ Michigan, Ann Arbor, MI 48109 USA
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
Funding
National Natural Science Foundation of China
Keywords
DOI
10.1109/CVPR52688.2022.00949
CLC classification number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In light of the success of contrastive learning in the image domain, current self-supervised video representation learning methods usually employ a contrastive loss to facilitate video representation learning. When naively pulling two augmented views of a video closer, however, the model tends to learn the common static background as a shortcut and fails to capture the motion information, a phenomenon dubbed background bias. Such bias weakens the model's generalization ability, leading to worse performance on downstream tasks such as action recognition. To alleviate this bias, we propose Foreground-background Merging (FAME), which deliberately composes the moving foreground region of one video onto the static background of others. Specifically, without any off-the-shelf detector, we extract the moving foreground from the background via frame difference and color statistics, and shuffle the background regions among the videos. By leveraging the semantic consistency between the original clips and the fused ones, the model focuses more on the motion patterns and is debiased from the background shortcut. Extensive experiments demonstrate that FAME effectively resists background cheating and thus achieves state-of-the-art performance on downstream tasks across the UCF101, HMDB51, and Diving48 datasets. The code and configurations are released at https://github.com/Mark12Ding/FAME.
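The abstract describes the core operation concretely enough to sketch: estimate a moving-foreground mask from frame differences (the paper additionally uses color statistics) and paste that foreground onto the static background of another clip in the batch. Below is a minimal PyTorch sketch of this merging step, assuming clips are float tensors of shape (B, C, T, H, W); the function names foreground_mask and fame_merge, the fg_ratio top-k thresholding heuristic, and the omission of the color-statistics cue are illustrative assumptions rather than the authors' implementation, which is available at the repository linked above.

import torch

def foreground_mask(clip, fg_ratio=0.4):
    # Rough moving-foreground mask for one clip of shape (C, T, H, W):
    # average the absolute difference between consecutive frames over
    # channels and time, then keep the fg_ratio fraction of pixels with
    # the largest temporal change. (Color statistics are omitted here.)
    diff = (clip[:, 1:] - clip[:, :-1]).abs().mean(dim=(0, 1))   # (H, W)
    k = max(1, int(fg_ratio * diff.numel()))
    thresh = diff.flatten().kthvalue(diff.numel() - k + 1).values
    return (diff >= thresh).float()[None, None]                  # (1, 1, H, W)

def fame_merge(clips, fg_ratio=0.4):
    # Compose each clip's moving foreground onto the static background of a
    # randomly chosen clip from the same batch. clips: (B, C, T, H, W).
    B = clips.size(0)
    masks = torch.stack([foreground_mask(c, fg_ratio) for c in clips])  # (B, 1, 1, H, W)
    perm = torch.randperm(B)                                            # donor backgrounds
    return masks * clips + (1 - masks) * clips[perm]

In the FAME setup, the fused clip and its original counterpart are treated as a positive pair for the contrastive loss, so matching them forces the encoder to rely on the shared foreground motion rather than the swapped-out background.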
Pages: 9706-9716
Page count: 11