Video and Text Matching with Conditioned Embeddings

Cited by: 5
Authors
Ali, Ameen [1 ]
Schwartz, Idan [2 ,3 ]
Hazan, Tamir [2 ]
Wolf, Lior [1 ]
Affiliations
[1] Tel Aviv Univ, Tel Aviv, Israel
[2] Technion, Haifa, Israel
[3] NetApp, Sunnyvale, CA USA
Source
2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022) | 2022
Keywords
LOCALIZATION;
DOI
10.1109/WACV51458.2022.00055
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We present a method for matching a text sentence from a given corpus to a given video clip, and vice versa. Traditionally, video and text matching is done by learning a shared embedding space, where the encoding of one modality is independent of the other. In this work, we encode the dataset data in a way that takes into account the query's relevant information. We demonstrate that the power of the method arises from pooling the interaction data between words and frames. Since the encoding of the video clip depends on the sentence it is compared to, the representation must be recomputed for each potential match. To this end, we propose an efficient shallow neural network. Its training employs a hierarchical triplet loss that is extendable to paragraph/video matching. The method is simple, provides explainability, and achieves state-of-the-art results for both sentence-clip and video-text matching by a sizable margin across five different datasets: ActivityNet, DiDeMo, YouCook2, MSR-VTT, and LSMDC. We also show that our conditioned representation can be transferred to video-guided machine translation, where we improve upon the current results on VATEX. Source code is available at https://github.com/AmeenAli/VideoMatch.
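To make the conditioning idea in the abstract concrete, below is a minimal sketch of pair-dependent scoring: a word-frame interaction matrix is pooled into conditioned embeddings, a shallow head maps them to a match score, and a margin-based triplet loss trains the scorer. This is a sketch under assumptions, not the authors' implementation (see the linked repository for that): PyTorch, the feature dimensions, and the names ConditionedMatcher, score_head, and triplet_loss are all illustrative.

```python
# Hypothetical sketch of conditioned video-text matching; dimensions and
# architecture are assumptions, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionedMatcher(nn.Module):
    """Scores a (video, sentence) pair by pooling word-frame interactions.

    Because each modality's representation is conditioned on the other,
    it must be recomputed for every candidate pair, as the abstract notes.
    """

    def __init__(self, dim: int = 512):
        super().__init__()
        self.dim = dim
        # A shallow network over the pooled interactions, standing in for
        # the abstract's "efficient shallow neural network" (details assumed).
        self.score_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, frames: torch.Tensor, words: torch.Tensor) -> torch.Tensor:
        # frames: (T, dim) frame features; words: (N, dim) word features.
        # Word-frame interaction matrix of scaled dot products: (T, N).
        inter = frames @ words.t() / self.dim ** 0.5
        # Condition each modality on the other via attention pooling:
        # per-word attention over frames yields text-conditioned video
        # features, and per-frame attention over words the converse.
        video_given_text = (F.softmax(inter, dim=0).t() @ frames).mean(0)  # (dim,)
        text_given_video = (F.softmax(inter, dim=1) @ words).mean(0)       # (dim,)
        # The shallow head maps both conditioned embeddings to one score.
        return self.score_head(torch.cat([video_given_text, text_given_video]))


def triplet_loss(pos: torch.Tensor, neg: torch.Tensor, margin: float = 0.2):
    # Standard margin-based triplet loss over match scores; the paper's
    # hierarchical variant would add an analogous term at the
    # paragraph/video level on top of this sentence/clip-level one.
    return F.relu(margin - pos + neg).mean()


# Usage: a matched pair should score higher than a mismatched one.
matcher = ConditionedMatcher()
frames = torch.randn(30, 512)       # 30 frames of a clip
words, neg_words = torch.randn(12, 512), torch.randn(9, 512)
loss = triplet_loss(matcher(frames, words), matcher(frames, neg_words))
```

A training loop would minimize this loss over sampled positive and negative pairs; at retrieval time, the shallow head keeps per-pair rescoring cheap despite the conditioning.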
Pages: 478-487 (10 pages)