Text-Video Retrieval via Multi-Modal Hypergraph Networks

Cited by: 3
Authors
Li, Qian [1 ]
Su, Lixin [1 ]
Zhao, Jiashu [2 ]
Xia, Long [1 ]
Cai, Hengyi [3 ]
Cheng, Suqi [1 ]
Tang, Hengzhu [1 ]
Wang, Junfeng [1 ]
Yin, Dawei [1 ]
Affiliations
[1] Baidu Inc., Beijing, China
[2] Wilfrid Laurier University, Waterloo, ON, Canada
[3] Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Source
Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM 2024), 2024
Keywords
text-video retrieval; multi-modal hypergraph; hypergraph neural networks
DOI
10.1145/3616855.3635757
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Text-video retrieval is a challenging task that aims to identify relevant videos given textual queries. Compared to conventional textual retrieval, the main obstacle for text-video retrieval is the semantic gap between the textual nature of queries and the visual richness of video content. Previous works primarily focus on aligning the query and the video by finely aggregating word-frame matching signals. However, inspired by the human cognitive process of modularly judging the relevance between text and video, we argue that this judgment requires high-order matching signals because video content is continuous and complex. In this paper, we propose chunk-level text-video matching, where query chunks are extracted to describe specific retrieval units and video chunks are segmented into distinct clips. We formulate chunk-level matching as n-ary correlation modeling between the words of the query and the frames of the video, and introduce a multi-modal hypergraph to capture these correlations. The hypergraph is constructed by representing textual units and video frames as nodes and using hyperedges to depict their relationships, so that the query and the video can be aligned in a high-order semantic space. In addition, to enhance the model's generalization ability, the extracted features are fed into a variational inference component that produces variational representations under a Gaussian distribution. The combination of hypergraphs and variational inference allows our model to capture complex, n-ary interactions among textual and visual contents. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on the text-video retrieval task.
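The minimal sketch below illustrates the two mechanisms named in the abstract; it is not the authors' implementation. It assumes a simple binary incidence-matrix formulation of hypergraph convolution and toy module names, shapes, and data, with a Gaussian variational head implemented via the standard reparameterization trick.

# Illustrative sketch (assumptions, not the paper's code): word and frame nodes
# exchange messages through hyperedges, then a pooled feature is mapped to a
# Gaussian variational representation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HypergraphConv(nn.Module):
    """One hypergraph convolution step: nodes -> hyperedges -> nodes."""
    def __init__(self, dim):
        super().__init__()
        self.node_proj = nn.Linear(dim, dim)
        self.edge_proj = nn.Linear(dim, dim)

    def forward(self, x, incidence):
        # x:         (num_nodes, dim)       node features (query words + video frames)
        # incidence: (num_nodes, num_edges) binary node-hyperedge membership
        deg_e = incidence.sum(0).clamp(min=1).unsqueeze(-1)     # hyperedge degrees
        deg_v = incidence.sum(1).clamp(min=1).unsqueeze(-1)     # node degrees
        edge_feat = incidence.t() @ self.node_proj(x) / deg_e   # aggregate member nodes into each hyperedge
        out = incidence @ self.edge_proj(edge_feat) / deg_v     # scatter hyperedge messages back to nodes
        return F.relu(out + x)                                  # residual node update


class VariationalHead(nn.Module):
    """Map pooled features to a Gaussian and sample with the reparameterization trick."""
    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # z ~ N(mu, sigma^2)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return z, kl


if __name__ == "__main__":
    dim, n_words, n_frames, n_edges = 64, 12, 20, 6
    words = torch.randn(n_words, dim)    # stand-in for text-encoder word embeddings
    frames = torch.randn(n_frames, dim)  # stand-in for visual-encoder frame embeddings
    nodes = torch.cat([words, frames], 0)
    # Each hyperedge ties a query chunk to a video clip, i.e. one n-ary correlation.
    incidence = (torch.rand(n_words + n_frames, n_edges) > 0.7).float()
    conv, head = HypergraphConv(dim), VariationalHead(dim)
    z, kl = head(conv(nodes, incidence).mean(0, keepdim=True))
    print(z.shape, kl.item())

In this toy formulation, each hyperedge simultaneously connects several word nodes and frame nodes, which is what lets a single edge encode an n-ary (rather than pairwise) correlation; the KL term from the variational head would be added to the retrieval loss as a regularizer.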
Pages: 369-377
Number of pages: 9