MAC: Masked Contrastive Pre-Training for Efficient Video-Text Retrieval

Citations: 1
Authors
Shu, Fangxun [1 ]
Chen, Biaolong [1 ]
Liao, Yue [2 ]
Wang, Jinqiao [3 ,4 ]
Liu, Si [2 ]
Affiliations
[1] Alibaba Grp, Beijing 100020, Peoples R China
[2] Beihang Univ, Inst Artificial Intelligence, Beijing 100083, Peoples R China
[3] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Redundancy; Computational modeling; Visualization; Training; Semantics; Feature extraction; Contrastive learning; end-to-end pretraining; masked modeling; video-text retrieval;
DOI
10.1109/TMM.2024.3402613
CLC classification
TP [Automation and Computer Technology];
Discipline code
0812 ;
Abstract
We present a simple yet effective end-to-end Video-language Pre-training (VidLP) framework, Masked Contrastive Video-language Pre-training (MAC), for video-text retrieval tasks. MAC reduces the spatial and temporal redundancy of video representations in the VidLP model through a mask sampling mechanism, improving pre-training efficiency. In contrast to conventional temporal sparse sampling, we randomly mask a high ratio of spatial regions and feed only the visible regions into the encoder, a form of sparse spatial sampling. For consistency, we apply the same mask sampling to the text inputs. Rather than blindly adopting the mask-then-predict paradigm of MAE, we propose a mask-then-align paradigm for efficient video-text alignment. The motivation is that video-text retrieval relies on high-level alignment rather than low-level reconstruction, and multimodal alignment under masked modeling encourages the model to learn robust, general multimodal representations from incomplete and unstable inputs. Together, these designs enable efficient end-to-end pre-training: a 3x speed-up, over 60% less computation, and over 4% higher performance. MAC achieves state-of-the-art results on various video-text retrieval datasets, including MSR-VTT, DiDeMo, and ActivityNet. Our approach is also omnivorous with respect to input modalities: with minimal modifications, it achieves competitive results on image-text retrieval tasks.
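The sparse spatial sampling described in the abstract keeps only a small fraction of patch tokens and feeds just those to the encoder, so encoder cost scales with the number of visible tokens rather than the full sequence. A minimal NumPy sketch of this kind of high-ratio random mask sampling (an illustration of the general mechanism, not the authors' implementation; the function name and 75% ratio are assumptions):

```python
import numpy as np

def sample_visible_patches(patches, mask_ratio=0.75, rng=None):
    """Randomly mask a high ratio of patches, returning only the visible ones.

    patches: array of shape (num_patches, dim), e.g. flattened frame patches.
    Only the kept (visible) patches would be passed to the encoder, which is
    what makes this style of pre-training cheap: with mask_ratio=0.75 the
    encoder sees only a quarter of the tokens.
    """
    rng = np.random.default_rng(rng)
    n = patches.shape[0]
    n_keep = max(1, int(round(n * (1.0 - mask_ratio))))
    # Random permutation, keep the first n_keep indices, restore order.
    keep_idx = np.sort(rng.permutation(n)[:n_keep])
    return patches[keep_idx], keep_idx

# Example: a 14x14 grid of 196 patches with a 75% mask ratio keeps 49 patches.
patches = np.zeros((196, 768))
visible, idx = sample_visible_patches(patches, mask_ratio=0.75, rng=0)
```

Under the mask-then-align paradigm, the visible-patch features would then enter a contrastive video-text objective directly, with no pixel reconstruction head.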
Pages: 9962-9972 (11 pages)
Related papers
50 records
  • [1] MILES: Visual BERT Pre-training with Injected Language Semantics for Video-Text Retrieval
    Ge, Yuying
    Ge, Yixiao
    Liu, Xihui
    Wang, Jinpeng
    Wu, Jianping
    Shan, Ying
    Qie, Xiaohu
    Luo, Ping
    COMPUTER VISION - ECCV 2022, PT XXXV, 2022, 13695 : 691 - 708
  • [2] End-to-End Pre-Training With Hierarchical Matching and Momentum Contrast for Text-Video Retrieval
    Shen, Wenxue
    Song, Jingkuan
    Zhu, Xiaosu
    Li, Gongfu
    Shen, Heng Tao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 5017 - 5030
  • [3] Using Multimodal Contrastive Knowledge Distillation for Video-Text Retrieval
    Ma, Wentao
    Chen, Qingchao
    Zhou, Tongqing
    Zhao, Shan
    Cai, Zhiping
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (10) : 5486 - 5497
  • [4] Expert-guided contrastive learning for video-text retrieval
    Lee, Jewook
    Lee, Pilhyeon
    Park, Sungho
    Byun, Hyeran
    NEUROCOMPUTING, 2023, 536 : 50 - 58
  • [5] Temporal Multimodal Graph Transformer With Global-Local Alignment for Video-Text Retrieval
    Feng, Zerun
    Zeng, Zhimin
    Guo, Caili
    Li, Zheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (03) : 1438 - 1453
  • [6] MimCo: Masked Image Modeling Pre-training with Contrastive Teacher
    Zhou, Qiang
    Yu, Chaohui
    Luo, Hao
    Wang, Zhibin
    Li, Hao
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 4487 - 4495
  • [7] Learning Coarse-to-Fine Graph Neural Networks for Video-Text Retrieval
    Wang, Wei
    Gao, Junyu
    Yang, Xiaoshan
    Xu, Changsheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 2386 - 2397
  • [8] Exploiting Unlabeled Videos for Video-Text Retrieval via Pseudo-Supervised Learning
    Lu, Yu
    Quan, Ruijie
    Zhu, Linchao
    Yang, Yi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 6748 - 6760
  • [9] Cross-Modal Contrastive Pre-Training for Few-Shot Skeleton Action Recognition
    Lu, Mingqi
    Yang, Siyuan
    Lu, Xiaobo
    Liu, Jun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10) : 9798 - 9807
  • [10] JM-CLIP: A JOINT MODAL SIMILARITY CONTRASTIVE LEARNING MODEL FOR VIDEO-TEXT RETRIEVAL
    Ge, Mingyuan
    Li, Yewen
    Wu, Honghao
    Li, Mingyong
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 3010 - 3014