MGMAE: Motion Guided Masking for Video Masked Autoencoding

Cited by: 7
Authors
Huang, Bingkun [1 ,2 ]
Zhao, Zhiyu [1 ,2 ]
Zhang, Guozhen [1 ]
Qiao, Yu [2 ]
Wang, Limin [1 ,2 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Shanghai AI Lab, Shanghai, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023) | 2023
Funding
National Key R&D Program of China; National Natural Science Foundation of China
DOI
10.1109/ICCV51070.2023.01241
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Masked autoencoding has shown excellent performance on self-supervised video representation learning. Temporal redundancy has led to a high masking ratio and a customized masking strategy in VideoMAE. In this paper, we aim to further improve the performance of video masked autoencoding by introducing a motion guided masking strategy. Our key insight is that motion is a general and unique prior in video, which should be taken into account during masked pre-training. Our motion guided masking explicitly incorporates motion information to build a temporally consistent masking volume. Based on this masking volume, we can track the unmasked tokens in time and sample a set of temporally consistent cubes from videos. These temporally aligned unmasked tokens further alleviate the issue of information leakage in time and encourage MGMAE to learn more useful structural information. We implement our MGMAE with an online efficient optical flow estimator and a backward masking map warping strategy. We perform experiments on the Something-Something V2 and Kinetics-400 datasets, demonstrating the superior performance of our MGMAE over the original VideoMAE. In addition, we provide a visualization analysis to illustrate that our MGMAE can sample temporally consistent cubes in a motion-adaptive manner for more effective video pre-training.
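The backward masking-map warping described in the abstract can be illustrated with a minimal sketch: a mask defined on one frame is propagated to the next frame by sampling through the optical flow field, so that the unmasked tokens follow the motion. The function name, the NumPy implementation, and the nearest-neighbor sampling are illustrative assumptions, not the authors' actual implementation (which uses an online flow estimator on token-level maps):

```python
import numpy as np

def warp_mask(mask, flow):
    """Backward-warp a binary mask using optical flow (nearest-neighbor).

    mask: (H, W) binary mask for frame t.
    flow: (H, W, 2) backward flow from frame t+1 to frame t, as (dx, dy).
    Returns the propagated mask for frame t+1.
    """
    H, W = mask.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # For each pixel of frame t+1, look up the frame-t pixel the flow points to.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return mask[src_y, src_x]

# Toy example: uniform backward flow of (-1, 0) shifts the mask right by one pixel.
mask0 = np.zeros((4, 4), dtype=np.uint8)
mask0[1, 1] = 1
flow = np.zeros((4, 4, 2))
flow[..., 0] = -1.0
mask1 = warp_mask(mask0, flow)  # the unmasked position moves from (1,1) to (1,2)
```

Applying this warp frame by frame yields a temporally consistent masking volume, so the visible tokens form motion-adaptive cubes rather than independently sampled patches per frame.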
Pages: 13447-13458
Page count: 12