DiffusionRet: Generative Text-Video Retrieval with Diffusion Model

Cited by: 18
Authors
Jin, Peng [1 ,3 ]
Li, Hao [1 ,3 ]
Cheng, Zesen [1 ,3 ]
Li, Kehan [1 ,3 ]
Ji, Xiangyang [4 ]
Liu, Chang [4 ]
Yuan, Li [1 ,2 ,3 ]
Chen, Jie [1 ,2 ,3 ]
Affiliations
[1] Peking Univ, Sch Elect & Comp Engn, Shenzhen, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] Peking Univ, Shenzhen Grad Sch, AI Sci AI4S Preferred Program, Shenzhen, Peoples R China
[4] Tsinghua Univ, Dept Automat & BNRist, Beijing, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV | 2023
Funding
National Key Research and Development Program of China;
DOI
10.1109/ICCV51070.2023.00234
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing text-video retrieval solutions are, in essence, discriminative models focused on maximizing the conditional likelihood, i.e., p(candidates|query). While straightforward, this de facto paradigm overlooks the underlying data distribution p(query), which makes it challenging to identify out-of-distribution data. To address this limitation, we tackle this task from a generative viewpoint and model the correlation between the text and the video as their joint probability p(candidates, query). This is accomplished through a diffusion-based text-video retrieval framework (DiffusionRet), which models the retrieval task as a process of gradually generating the joint distribution from noise. During training, DiffusionRet is optimized from both the generation and discrimination perspectives: the generator is optimized with a generation loss, and the feature extractor is trained with a contrastive loss. In this way, DiffusionRet leverages the strengths of both generative and discriminative methods. Extensive experiments on five commonly used text-video retrieval benchmarks (MSRVTT, LSMDC, MSVD, ActivityNet Captions, and DiDeMo), on which our method achieves superior performance, justify its efficacy. More encouragingly, without any modification, DiffusionRet even performs well in out-of-domain retrieval settings. We believe this work brings fundamental insights into the related fields. Code is available at https://github.com/jpthu17/DiffusionRet.
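To make the training scheme described in the abstract concrete, the following is a minimal, self-contained PyTorch sketch of the general idea: a discriminative contrastive (InfoNCE) loss on the text/video features combined with a generative denoising loss on a text-video relevance distribution. This is an illustration under simplifying assumptions, not the authors' implementation (see the linked repository for that); the names ToyDenoiser and training_step are hypothetical, and a single Gaussian corruption step stands in for the full diffusion process.

# Minimal sketch (not the authors' code): joint generative + discriminative
# training for retrieval, loosely following the DiffusionRet idea. Module and
# function names (ToyDenoiser, training_step) are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDenoiser(nn.Module):
    """Predicts a clean text-video relevance distribution from a noisy one,
    conditioned on pooled query and candidate embeddings (simplification)."""
    def __init__(self, dim=256, num_candidates=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_candidates + 2 * dim, 512), nn.GELU(),
            nn.Linear(512, num_candidates),
        )

    def forward(self, noisy_scores, query_emb, cand_emb_mean):
        # Condition the denoiser on the query features and a pooled summary
        # of the candidate features.
        x = torch.cat([noisy_scores, query_emb, cand_emb_mean], dim=-1)
        return self.net(x)

def training_step(denoiser, text_emb, video_emb, temperature=0.07):
    """One toy step: contrastive loss for the feature extractor
    (discriminative) plus a denoising 'generation' loss for the generator."""
    B = text_emb.size(0)
    text_emb = F.normalize(text_emb, dim=-1)
    video_emb = F.normalize(video_emb, dim=-1)

    # Discriminative part: symmetric InfoNCE over the in-batch similarity matrix.
    sims = text_emb @ video_emb.t() / temperature
    labels = torch.arange(B)
    contrastive = 0.5 * (F.cross_entropy(sims, labels) +
                         F.cross_entropy(sims.t(), labels))

    # Generative part: corrupt the target one-hot relevance distribution with
    # Gaussian noise and ask the denoiser to recover it (a one-step stand-in
    # for the full diffusion process described in the paper).
    target = F.one_hot(labels, num_classes=B).float()
    noisy = target + 0.5 * torch.randn_like(target)
    cand_summary = video_emb.mean(dim=0, keepdim=True).expand(B, -1)
    pred = denoiser(noisy, text_emb, cand_summary)
    generation = F.kl_div(F.log_softmax(pred, dim=-1),
                          F.softmax(target / temperature, dim=-1),
                          reduction="batchmean")

    return contrastive + generation

if __name__ == "__main__":
    torch.manual_seed(0)
    B, D = 16, 256
    denoiser = ToyDenoiser(dim=D, num_candidates=B)
    # In real training these embeddings would come from trainable text/video
    # encoders; random tensors are used here only to make the sketch runnable.
    text_emb, video_emb = torch.randn(B, D), torch.randn(B, D)
    loss = training_step(denoiser, text_emb, video_emb)
    loss.backward()
    print(f"toy joint loss: {loss.item():.4f}")

At inference, the framework described in the abstract generates the joint distribution by iterating the denoiser from noise over a full schedule; the sketch above collapses that into a single corruption-and-denoise step purely for brevity.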
Pages: 2470-2481
Number of pages: 12