Multi-Task Video Captioning with Video and Entailment Generation

Cited by: 63
Authors
Pasunuru, Ramakanth [1 ]
Bansal, Mohit [1 ]
Affiliations
[1] Univ N Carolina, Chapel Hill, NC 27515 USA
Source
PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 1 | 2017
DOI
10.18653/v1/P17-1117
Chinese Library Classification (CLC)
TP39 [Applications of Computers];
Discipline Classification Code
081203; 0835;
Abstract
Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware video encoder representations, and a logically-directed language entailment generation task to learn better video-entailing caption decoder representations. For this, we present a many-to-many multi-task learning model that shares parameters across the encoders and decoders of the three tasks. We achieve significant improvements and the new state-of-the-art on several standard video captioning datasets using diverse automatic and human evaluations. We also show mutual multi-task improvements on the entailment generation task.
Pages: 1273-1283
Page count: 11