CMMT: Cross-Modal Meta-Transformer for Video-Text Retrieval

Cited by: 3
Authors
Gao, Yizhao [1 ]
Lu, Zhiwu [1 ]
Affiliations
[1] Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 2023 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
Video-text retrieval; meta-learning; representation learning;
DOI
10.1145/3591106.3592238
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video-text retrieval has drawn great attention due to the prosperity of online video content. Most existing methods extract video embeddings by densely sampling abundant (generally dozens of) video clips, which incurs tremendous computational cost. To reduce resource consumption, recent works propose to sparsely sample fewer clips from each raw video within a narrow time span. However, they still struggle to learn a reliable video representation from such locally sampled video clips, especially when tested in the cross-dataset setting. In this work, to overcome this problem, we sparsely and globally (i.e., with a wide time span) sample a handful of video clips from each raw video, which can be regarded as different samples of a pseudo video class (i.e., each raw video denotes a pseudo video class). From this viewpoint, we propose a novel Cross-Modal Meta-Transformer (CMMT) model that can be trained in a meta-learning paradigm. Concretely, in each training step, we conduct a cross-modal fine-grained classification task in which text queries are classified against pseudo video class prototypes (each aggregating all sampled video clips of its pseudo video class). Since each classification task is defined over different/new videos (simulating the evaluation setting), this task-based meta-learning process enables our model to generalize well to new tasks and thus learn generalizable video/text representations. To further enhance the generalizability of our model, we introduce a token-aware adaptive Transformer module that dynamically updates our model (prototypes) for each individual text query. Extensive experiments on three benchmarks show that our model achieves new state-of-the-art results in cross-dataset video-text retrieval, demonstrating its stronger generalizability in video-text retrieval. Importantly, we find that our new meta-learning paradigm indeed brings improvements under both cross-dataset and in-dataset retrieval settings.
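As a concrete illustration of the episodic training described in the abstract, the following PyTorch sketch builds pseudo-video-class prototypes by averaging the globally sampled clip embeddings of each raw video and classifies the paired text queries against those prototypes with a cross-entropy loss. The function name, tensor shapes, prototype aggregation (mean pooling), and temperature value are illustrative assumptions, not the authors' released implementation; the token-aware adaptive Transformer module that adapts prototypes per text query is omitted.

import torch
import torch.nn.functional as F

def meta_episode_loss(clip_emb, text_emb, temperature=0.05):
    """Cross-modal fine-grained classification loss for one meta-training episode.

    clip_emb: (B, K, D) embeddings of K globally sampled clips for B raw videos,
              where each raw video acts as one pseudo video class.
    text_emb: (B, D) embeddings of the text query paired with each raw video.
    """
    # Prototype per pseudo video class: aggregate (here, average) its sampled clips.
    prototypes = F.normalize(clip_emb.mean(dim=1), dim=-1)   # (B, D)
    text_emb = F.normalize(text_emb, dim=-1)                  # (B, D)

    # Classify each text query against all pseudo-class prototypes in the episode.
    logits = text_emb @ prototypes.t() / temperature          # (B, B)
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    # Toy usage with random features standing in for video/text encoder outputs:
    # 8 pseudo video classes, 4 globally sampled clips each, 512-d embeddings.
    clips = torch.randn(8, 4, 512, requires_grad=True)
    texts = torch.randn(8, 512, requires_grad=True)
    loss = meta_episode_loss(clips, texts)
    loss.backward()
    print(f"episode loss: {loss.item():.4f}")

Because every episode draws a fresh set of raw videos, the classification targets change from step to step, which is what lets this episodic setup mimic evaluation on unseen videos.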
Pages: 76-84
Page count: 9