Holistic Features are almost Sufficient for Text-to-Video Retrieval

Cited by: 4
Authors
Tian, Kaibin [1 ]
Zhao, Ruixiang [1 ]
Xin, Zijie [1 ,2 ]
Lan, Bangxiang [1 ]
Li, Xirong [1 ]
Affiliations
[1] Renmin Univ China, Key Lab DEKE, MoE, Beijing, Peoples R China
[2] Sichuan Univ, Coll Comp Sci, Chengdu, Peoples R China
Source
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2024
DOI
10.1109/CVPR52733.2024.01622
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
For text-to-video retrieval (T2VR), which aims to retrieve unlabeled videos by ad-hoc textual queries, CLIP-based methods currently lead the way. Compared to CLIP4Clip which is efficient and compact, state-of-the-art models tend to compute video-text similarity through fine-grained cross-modal feature interaction and matching, putting their scalability for large-scale T2VR applications into doubt. We propose TeachCLIP, enabling a CLIP4Clip based student network to learn from more advanced yet computationally intensive models. In order to create a learning channel to convey fine-grained cross-modal knowledge from a heavy model to the student, we add to CLIP4Clip a simple Attentional frame-Feature Aggregation (AFA) block, which by design adds no extra storage / computation overhead at the retrieval stage. Frame-text relevance scores calculated by the teacher network are used as soft labels to supervise the attentive weights produced by AFA. Extensive experiments on multiple public datasets justify the viability of the proposed method. TeachCLIP has the same efficiency and compactness as CLIP4Clip, yet has near-SOTA effectiveness.
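To make the mechanism described in the abstract concrete, here is a minimal PyTorch-style sketch, under stated assumptions, of an attentional frame-feature aggregation block plus a distillation loss in which a teacher's frame-text relevance scores act as soft labels for the student's attentive weights. All names (AFA, afa_distill_loss), shapes, and the exact loss form (a KL divergence with temperature) are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of the Attentional frame-Feature Aggregation (AFA) idea from the
# abstract. Names, shapes, and the loss form are assumptions for illustration.
import torch
import torch.nn.functional as F

class AFA(torch.nn.Module):
    """Aggregate per-frame features into one holistic video feature."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.scorer = torch.nn.Linear(dim, 1)  # one scalar score per frame

    def forward(self, frame_feats: torch.Tensor):
        # frame_feats: (batch, num_frames, dim)
        scores = self.scorer(frame_feats).squeeze(-1)           # (batch, num_frames)
        attn = scores.softmax(dim=-1)                           # attentive weights
        video_feat = (attn.unsqueeze(-1) * frame_feats).sum(1)  # (batch, dim)
        return video_feat, attn

def afa_distill_loss(student_attn: torch.Tensor,
                     teacher_frame_text_scores: torch.Tensor,
                     tau: float = 1.0) -> torch.Tensor:
    """KL divergence pushing the student's attentive weights toward the
    teacher's frame-text relevance distribution (soft labels). The exact
    loss used by TeachCLIP is not given in the abstract; KL is an assumption."""
    soft_labels = (teacher_frame_text_scores / tau).softmax(dim=-1)
    return F.kl_div(student_attn.clamp_min(1e-8).log(), soft_labels,
                    reduction="batchmean")

# Usage on dummy data (hypothetical shapes):
afa = AFA(dim=512)
frames = torch.randn(2, 12, 512)                 # 2 videos, 12 frames each
video_feat, attn = afa(frames)
teacher_scores = torch.randn(2, 12)              # per-frame relevance from a teacher
loss = afa_distill_loss(attn, teacher_scores)
```

Because the aggregated video feature can be precomputed offline, retrieval reduces to a single text-video dot product, which is consistent with the abstract's claim that AFA adds no storage or computation overhead at the retrieval stage.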
Pages: 17138-17147
Number of pages: 10