Video Captioning with Guidance of Multimodal Latent Topics

Cited by: 52
Authors
Chen, Shizhe [1 ]
Chen, Jia [2 ]
Jin, Qin [1 ]
Hauptmann, Alexander [2 ]
Affiliations
[1] Renmin Univ China, Beijing, Peoples R China
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17) | 2017
Keywords
Video Captioning; Multimodal; Latent Topics; Multi-task
DOI
10.1145/3123266.3123420
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline Classification Code
081202
Abstract
The topic diversity of open-domain videos leads to a wide range of vocabularies and linguistic expressions in describing video contents, which makes the video captioning task even more challenging. In this paper, we propose a unified captioning framework, M&M TGM, which mines multimodal topics from data in an unsupervised fashion and guides the caption decoder with these topics. Compared to pre-defined topics, the mined multimodal topics are more semantically and visually coherent and better reflect the topic distribution of videos. We formulate topic-aware caption generation as a multi-task learning problem, adding a parallel topic prediction task alongside the captioning task. For the topic prediction task, we use the mined topics as a teacher to train a student topic prediction model, which learns to predict the latent topics from the multimodal contents of videos. The topic prediction task provides intermediate supervision for the learning process. For the captioning task, we propose a novel topic-aware decoder that generates more accurate and detailed video descriptions under the guidance of the latent topics. The entire learning procedure is end-to-end and optimizes both tasks simultaneously. Results from extensive experiments conducted on the MSR-VTT and Youtube2Text datasets demonstrate the effectiveness of our proposed model. M&M TGM not only outperforms prior state-of-the-art methods on multiple evaluation metrics and on both benchmark datasets, but also achieves better generalization ability.
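For intuition, the following is a minimal sketch (not the authors' released code) of the multi-task idea described in the abstract: a shared video representation feeds both a student topic-prediction head, trained against teacher topics mined offline, and a caption decoder whose per-step input is conditioned on the predicted topic vector. All module names, dimensions, and the loss weight are illustrative assumptions, not values from the paper.

```python
# Sketch of topic-guided multi-task captioning (assumed shapes and names).
import torch
import torch.nn as nn

class TopicGuidedCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, topic_dim=50, embed_dim=300,
                 hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden_dim)       # project video features
        self.topic_head = nn.Linear(hidden_dim, topic_dim)   # student topic predictor
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Decoder input = word embedding concatenated with the topic vector.
        self.decoder = nn.LSTM(embed_dim + topic_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, video_feats, captions):
        h = torch.tanh(self.encoder(video_feats))            # (B, hidden)
        topic_logits = self.topic_head(h)                    # (B, topic_dim)
        topics = torch.softmax(topic_logits, dim=-1)
        emb = self.embed(captions)                           # (B, T, embed)
        # Broadcast the predicted topic vector to every decoding step.
        topic_seq = topics.unsqueeze(1).expand(-1, emb.size(1), -1)
        dec_in = torch.cat([emb, topic_seq], dim=-1)
        h0 = h.unsqueeze(0)                                  # init decoder from video state
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(dec_in, (h0, c0))
        return self.out(dec_out), topic_logits

# Joint training: caption cross-entropy plus a topic loss against mined
# "teacher" topics; lambda_topic (0.5 here) is an assumed trade-off weight.
model = TopicGuidedCaptioner()
video_feats = torch.randn(4, 2048)
captions = torch.randint(0, 10000, (4, 12))
teacher_topics = torch.softmax(torch.randn(4, 50), dim=-1)   # dummy mined topics
word_logits, topic_logits = model(video_feats, captions[:, :-1])
cap_loss = nn.functional.cross_entropy(
    word_logits.reshape(-1, 10000), captions[:, 1:].reshape(-1))
topic_loss = nn.functional.kl_div(
    torch.log_softmax(topic_logits, dim=-1), teacher_topics, reduction="batchmean")
loss = cap_loss + 0.5 * topic_loss
loss.backward()
```

Conditioning every decoding step on the topic vector is one simple way to realize "guidance from latent topics"; the paper's actual topic-aware decoder architecture may differ.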
Pages: 1838-1846
Page count: 9