Learning Video-Text Aligned Representations for Video Captioning

Cited by: 10
Authors
Shi, Yaya [1 ]
Xu, Haiyang [2 ]
Yuan, Chunfeng [3 ]
Li, Bing [3 ]
Hu, Weiming [3 ,4 ,5 ]
Zha, Zheng-Jun [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, 96 Jinzhai Rd, Hefei 230026, Anhui, Peoples R China
[2] Alibaba Grp, 969 Wenyi West Rd, Hangzhou 311121, Zhejiang, Peoples R China
[3] Chinese Acad Sci, Inst Automat, NLPR, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
[5] CAS Ctr Excellence Brain Sci & Intelligence Techn, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
Funding
Beijing Natural Science Foundation; National Key Research and Development Program of China;
Keywords
Video captioning; video-text alignment; aligned representation;
DOI
10.1145/3546828
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Video captioning requires a model with the abilities of video understanding, video-text alignment, and text generation. Because of the semantic gap between vision and language, video-text alignment, which maps representations from the visual domain to the language domain, is a crucial step in bridging this gap. However, existing methods often overlook this step, so the decoder has to take visual representations directly as input, which increases the decoder's workload and limits its ability to generate semantically correct captions. In this paper, we propose a video-text alignment module with a retrieval unit and an alignment unit to learn video-text aligned representations for video captioning. Specifically, we first propose a retrieval unit that retrieves sentences as additional input, which serve as semantic anchors between the visual scene and the language description. Then, we employ an alignment unit that takes the video and the retrieved sentences as input to conduct video-text alignment. The representations of the two modalities are aligned in a shared semantic space, and the resulting video-text aligned representations are used to generate semantically correct captions. Moreover, the retrieved sentences provide rich semantic concepts that help generate distinctive captions. Experiments on two public benchmarks, i.e., VATEX and MSR-VTT, demonstrate that our method outperforms state-of-the-art methods by a large margin. Qualitative analysis shows that our method generates correct and distinctive captions.
Pages: 21
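
The abstract describes a retrieval unit that fetches sentences as semantic anchors and an alignment unit that maps video and retrieved-sentence representations into a shared semantic space before decoding. The PyTorch snippet below is a minimal illustrative sketch of that idea under assumptions of my own (cosine-similarity retrieval, mean-pooled video features, linear projections into the shared space, and all names and dimensions, e.g. VideoTextAlignmentSketch); it is not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoTextAlignmentSketch(nn.Module):
    # Illustrative sketch only: a retrieval unit (top-k sentence lookup) and an
    # alignment unit (projection into a shared semantic space), following the
    # abstract's description. Dimensions and design choices are assumptions.
    def __init__(self, video_dim=1024, text_dim=768, shared_dim=512, k=5):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, shared_dim)   # video -> shared space
        self.text_proj = nn.Linear(text_dim, shared_dim)     # text  -> shared space
        self.k = k

    def retrieve(self, video_emb, sent_emb):
        # Retrieval unit (assumed form): cosine similarity between the pooled video
        # embedding and each corpus sentence, keeping the top-k as semantic anchors.
        sims = F.normalize(video_emb, dim=-1) @ F.normalize(sent_emb, dim=-1).t()
        topk = sims.topk(self.k, dim=-1).indices              # (batch, k)
        return sent_emb[topk]                                 # (batch, k, shared_dim)

    def forward(self, video_feats, corpus_emb):
        # video_feats: (batch, frames, video_dim); corpus_emb: (num_sentences, text_dim)
        video_emb = self.video_proj(video_feats.mean(dim=1))  # mean-pooled video embedding
        sent_emb = self.text_proj(corpus_emb)
        retrieved = self.retrieve(video_emb, sent_emb)
        # Alignment unit (assumed form): concatenate the aligned video and sentence
        # representations; a caption decoder would attend over this sequence.
        return torch.cat([video_emb.unsqueeze(1), retrieved], dim=1)  # (batch, 1+k, shared_dim)

if __name__ == "__main__":
    model = VideoTextAlignmentSketch()
    video = torch.randn(2, 8, 1024)    # 2 videos, 8 frames of precomputed features each
    corpus = torch.randn(100, 768)     # 100 candidate sentence embeddings
    print(model(video, corpus).shape)  # torch.Size([2, 6, 512])

In this sketch the decoder itself is omitted; the returned sequence of video-text aligned representations would be consumed by a caption generator, and the retrieval corpus would in practice hold pre-encoded training sentences rather than random embeddings.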